
Computers in Human Behavior 61 (2016) 522–531

Contents lists available at ScienceDirect

Computers in Human Behavior

journal homepage: www.elsevier.com/locate/comphumbeh

Full length article

Vocational college students' acceptance of web-based summative listening comprehension test in an EFL course

Harun Cigdem a,*, Mustafa Ozturk b, Abdullah Topcu a
a Turkish Land Forces Non-Commissioned Officer Vocational College, Balikesir, Turkey
b Hacettepe University, School of Foreign Languages, Ankara, Turkey
Article history:
Received 9 December 2015
Received in revised form 20 March 2016
Accepted 22 March 2016
Available online 28 March 2016

Keywords:
Behavioral intention
Web based listening test
Technology acceptance
Military vocational college
EFL course
Online testing

Abstract

There is a substantial increase in the utilization of web-based assessment procedures to assist teaching and learning processes in higher education institutions. Despite their benefits, institutions use them to a limited extent due to a number of factors influencing both instructors' and students' behaviors. This study examines students' acceptance of a web-based summative listening test administered in the 2014–2015 academic year within an 'English as a Foreign Language' course at a two-year post-secondary military school in Turkey. The participants consisted of 602 military students. As a model, the Computer Based Assessment Acceptance Model, based on the Technology Acceptance Model, was adopted in order to analyze the participants' perceptions on a web-based summative listening comprehension test. The data were collected via an online questionnaire. A structural equation modeling analysis was utilized to analyze the relationships among factors. The general results showed that perceived ease of use and perceived playfulness had a direct influence on the participants' behavioral intentions to use web-based tests. In other words, web-based tests are expected to be used if they are easy to use and playful enough.

© 2016 Elsevier Ltd. All rights reserved.

1. Introduction

Seeing that students' frequent use of computers and the Internet is increasing rapidly, web-based testing (WBT) is also becoming a prevalent system as an alternative to traditional assessment practices in all educational settings around the world. WBT is related to the conceptualization and administration of assessments as a sophisticated way of using web technologies (Cigdem & Oncu, 2015) with the aim of expanding educational assessments in universities, schools, or other industries. WBT's becoming widespread resides in the fact that an increasing number of faculty members have started to realize the convenience of creating, implementing, and managing assessment as part of learning management systems (Llamas-Nistal, Fernández-Iglesias, González-Tato, & Mikic-Fonte, 2013). WBT is seen as a noteworthy method for instructors, especially regarding high-stakes tests, because it aims to optimize the goals and techniques of teaching and testing in shorter times (Pino-Silva, 2008), and the delivery or administration of the assessment is not supposed to be at a fixed time or place (Cigdem & Oncu, 2015; Jeong, 2014).

There have been numerous advantages of WBT for both test-developers and test-takers, such as flexibility of time and place; enhanced resource use; immediate and real-time feedback; high interaction with test-takers; quick results and real-time score reports; automated grading and reporting; easier data management; cost reduction; more productive managing, organizing, and deploying of exams; time-saving evaluation of learners' strengths and weaknesses; and learners' self-evaluation (Abedi, 2014; Bull & McKenna, 2004; Chou, Moslehpour, & Huyen, 2014; Cigdem & Tan, 2014; Llamas-Nistal et al., 2013; Morris, 2008; Terzis, Moridis, & Economides, 2013; Zakrzewski & Steven, 2000). Along with such advantages, an effective WBT system is required to provide authentic assessment activities and meaningful feedback as well as to support multidimensional perspectives (Gikandi, Morrow, & Davis, 2011).

As test developers and test takers are being immersed in WBT systems, a greater number of educational researchers tend to work on the acceptance of WBT systems with the purpose of defining the variables that might explain the acceptance issue. Users' perceived acceptance of a technology or behavioral intentions to use a technology have been studied and described by many researchers previously, as in the Theory of Reasoned Action (TRA) (Ajzen &

* Corresponding author.
E-mail addresses: hcigdem@gmail.com (H. Cigdem), mustafaozturk@hacettepe.edu.tr (M. Ozturk), abdullah.topcu@boun.edu.tr (A. Topcu).

http://dx.doi.org/10.1016/j.chb.2016.03.070

Fishbein, 1980); the Theory of Planned Behavior (TPB) (Ajzen, 1991); the Technology Acceptance Model (TAM) (Davis, Bagozzi, & Warshaw, 1989); the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, Morris, Davis, & Davis, 2003); and the Computer Based Assessment Acceptance Model (CBAAM) (Terzis & Economides, 2011). Considering the scope of the current research, the CBAAM was adopted and studied with the aim of determining the students' intentions to use a WBT system in the case of a summative assessment of Listening Comprehension in English.

2. Literature review

As researchers have been driven to examine the factors that affect users' perceptions of the adoption and acceptance of WBT systems, there has been a considerable amount of literature portraying the application of the CBAAM in various educational contexts. To exemplify, Dermo (2009) surveyed 130 undergraduate students participating in an online (formative/summative) assessment system during the 2007–2008 academic year and found that the reliability, security, validity, and accessibility of the system were accepted by the participants coming from different academic programs like management, informatics and engineering, life sciences, and social sciences and education, and that learning was therefore promoted among them. As another example, Terzis and Economides (2011) worked on the perceptions of introductory informatics course students on the acceptance of WBT. Their results indicated that perceived ease of use and perceived playfulness had a direct effect on the use of computer-based assessments, whereas perceived usefulness had only an indirect effect. In Sorensen's (2013) survey on students' perceptions of e-assessments in a college chemistry course, the participants thought that e-assessment facilitated their learning. With that study, more frequent implementations of e-assessment in other courses were also suggested. In a study conducted with military students in a Mathematics course, it was seen that students having computers and the Internet at home, as well as prior web-based exam experiences, were more optimistic about computer-based assessments than the other students (Cigdem & Tan, 2014). In another survey done with military students, Cigdem and Oncu (2015) investigated perceptions of WBT in a computer networks course and found that the contents of the questions significantly affected perceived usefulness and that perceived usefulness had a great influence on users' behavioral intentions.

All those attempts reveal that an increasing number of educational settings adopt or use WBT systems to deliver and manage their assessment procedures more efficiently and more conveniently. Since language testing is one of those settings, this study investigates the factors that are likely to influence learners' intentions to use WBT systems in the assessment of their listening comprehension. Even though WBT systems are thought to be useful in the realm of language testing, whether such systems are preferred over paper-and-pencil assessment procedures is not clear enough (Pino-Silva, 2008). Therefore, the current study is thought to be an important step in looking into the perceptions of students on WBT systems when assessing a fundamental skill of language education, which is listening comprehension.

3. Significance of the study

The Turkish educational system is trying to raise generations who can speak at least one foreign language, which is frequently English, to be able to internationalize its institutions and citizens, and thus promotes language teaching at all levels of education, particularly in higher education. Having graduates who can understand and speak English is one of the primary policies of many higher education institutions. This fact makes language teaching and testing in higher education a difficult and demanding task. Although the teaching of reading and writing skills is managed to some extent in current implementations, there is still limited competence in the teaching and testing of listening and speaking skills. An important attempt, therefore, is to carry out a computer-assisted system to expand the teaching and testing of such skills and make them more practical and widespread. From this point forth, this study will be a good basis for the upcoming value of teaching and testing listening comprehension through technology in higher education. Although the major contribution of the study was to look into the variables influencing the students' intentions to use a WBT system, this study also aimed to contribute to the field through an attempt to implement a WBT system within the context of a foreign language course, in particular for an underestimated sub-skill, which is Listening Comprehension. Various forms of WBT systems were adopted previously in various content areas; yet, such studies were limited in number and scope for language teaching contexts. Apart from this, the current research setting, which is a unique post-secondary higher education institution among its types, adds to the significance of the study.

4. Research model and hypotheses

The Computer Based Assessment Acceptance Model (CBAAM) was proposed by Terzis and Economides (2011) on the basis of previous acceptance models such as the TAM, TPB, and UTAUT, as well as with the inclusion of additional variables, in order to describe behavioral intention to use a WBT system. Eight variables were included within the model: perceived usefulness, perceived ease of use, computer self-efficacy, social influence, facilitating conditions, perceived playfulness, content, and goal expectancy. On the whole, Perceived Usefulness and Perceived Ease of Use were adopted from the TAM (Davis, 1989); Computer Self Efficacy was adopted from Social Cognitive Theory (Compeau & Higgins, 1995); Perceived Playfulness was adopted from an extended TAM version by Moon and Kim (2001); and Facilitating Conditions and Social Influence were adopted from the UTAUT (Venkatesh et al., 2003). Apart from those, two more variables, Goal Expectancy, which is rooted in Self-Management of Learning (Wang, Wu, & Wang, 2009), and Content, were introduced to the model by Terzis and Economides (2011).

As illustrated in Fig. 1, the CBAAM suggests that users' intention to use a WBT system is directly linked to perceived ease of use, perceived playfulness, perceived usefulness, and the content of the web-based test. Apart from those links, perceived ease of use is defined by computer self-efficacy and facilitating conditions, whereas perceived playfulness is defined by perceived usefulness, goal expectancy, and the content of the web-based test. On the other hand, perceived usefulness is also influenced by social influence, goal expectancy, and the content of the web-based test.

4.1. Perceived playfulness (PP)

Moon and Kim (2001) offered Perceived Playfulness (PP) in the TAM as a key belief constructed through an individual's personal experience with a system. PP is defined as "the pleasure the individual feels objectively when committing a particular behavior or carrying out a particular activity". PP, as a significant variable, is attributed to have a positive impact on behavioral intention to accept and use the Internet or a WBT system (Moon & Kim, 2001; Terzis & Economides, 2011) and is determined by the aspects of concentration, curiosity, and enjoyment (Terzis & Economides, 2011). In this framework, it is about the arousal of an individual's enjoyment, cognitive curiosity, and concentration

Fig. 1. Computer based assessment acceptance model.
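To make the hypothesized structure concrete, the paths shown in Fig. 1 (and formalized as hypotheses H1–H15 in the following subsections) can be encoded as a small directed graph. This is an illustrative sketch, not part of the original analysis; the abbreviations follow the paper (BI = behavioral intention, PP = perceived playfulness, PU = perceived usefulness, PEU = perceived ease of use, CSE = computer self-efficacy, SI = social influence, FC = facilitating conditions, GE = goal expectancy, C = content).

```python
# Hypothesized CBAAM paths, encoded as outcome -> [direct predictors].
PATHS = {
    "BI":  ["PP", "PU", "PEU", "C"],   # H1, H2, H4, H15
    "PP":  ["PU", "PEU", "GE", "C"],   # H3, H6, H11, H13
    "PU":  ["PEU", "SI", "GE", "C"],   # H5, H8, H10, H12
    "PEU": ["CSE", "FC"],              # H7, H9
    "GE":  ["C"],                      # H14
}

def determinants(construct, paths=PATHS):
    """Return every construct that influences `construct`,
    directly or through intermediate constructs."""
    seen, stack = set(), list(paths.get(construct, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(paths.get(p, []))
    return seen

# SI and CSE reach behavioral intention only indirectly (via PU and PEU).
print(sorted(determinants("BI")))
```

Every exogenous variable in the model ultimately feeds behavioral intention, which is why the SEM reported later can distinguish direct effects (e.g. PP → BI) from purely indirect ones (e.g. SI → PU → BI).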

during an activity. Our hypothesis for the current concept was as follows:

H1. PP would have a positive effect on the Behavioral Intention to use WBT.

4.2. Perceived usefulness (PU)

As one of the important factors introduced within the TAM, Perceived Usefulness (PU) is defined as "the level that a person thinks using a certain system would enhance his/her job performance within an organizational context" (Davis, 1989, p. 320) and as the degree to which an innovation is perceived as being better than its precursor (Rogers, 2003). PU was found to be a strong factor directly influencing the behavioral intention to use a technology in various educational research contexts (Cigdem & Oncu, 2015; Ong & Lai, 2006; Sun, Tsai, Finger, Chen, & Yeh, 2008; Van Raaij & Schepers, 2008; Venkatesh & Davis, 2000). In this sense, a useful WBT is attributed to increase perceived playfulness (Terzis & Economides, 2011). Therefore, we hypothesized:

H2. PU would have a positive effect on the Behavioral Intention to use WBT.

H3. PU would have a positive effect on PP.

4.3. Perceived ease of use (PEU)

Perceived Ease of Use (PEU), which is defined as "the degree to which a person believes that using a particular system would be free of effort" (Davis, 1989; Rogers, 2003), is the second major variable of the TAM (Davis, 1989). A large body of research has put forward evidence of the positive impact of PEU on behavioral intention, perceived usefulness, and perceived playfulness (Terzis & Economides, 2011; Venkatesh, 1999; Venkatesh & Davis, 1996). In this context, as the level of PEU regarding an e-learning system increases, the acceptance and use of that system by the participants is most likely to increase (Teo, Lee, & Chai, 2008). In this line, we had the following hypotheses:

H4. PEU would have a positive effect on the Behavioral Intention to use WBT.

H5. PEU would have a positive effect on PU.

H6. PEU would have a positive effect on PP.

4.4. Computer self-efficacy (CSE)

Explained as an individual's feelings of his/her ability to use computers (Compeau & Higgins, 1995), Computer Self Efficacy (CSE) was claimed to have a direct effect on PEU and an indirect link to the behavioral intention (Terzis & Economides, 2011). Building on this claim, we also hypothesized:

H7. CSE would have a positive effect on PEU.

4.5. Social influence (SI)

Integrated into the CBAAM as a variable from the UTAUT (Teo, 2009; Teo et al., 2008) and used frequently in LMS acceptance models (Van Raaij & Schepers, 2008; Wang et al., 2009), Social Influence (SI) is related to individuals' beliefs of how they are influenced by the opinions and judgments of their colleagues, friends, family members, and superiors (Fishbein & Ajzen, 1975; Taylor & Todd, 1995; Venkatesh et al., 2003). SI was measured through three key variables: Subjective Norm, Image, and Voluntariness (Karahanna & Straub, 1999; Venkatesh et al., 2003). SI was claimed to be one of the major determinants explaining behavioral intention within certain acceptance models like TAM2 and UTAUT and to have a positive effect on PU in LMS and WBT contexts (e.g. Cigdem & Topcu, 2015; Terzis & Economides, 2011; Wang et al., 2009). Although a significantly positive effect of SI only on PU was found within the CBAAM, SI was said to determine users' behavioral intentions indirectly through PU (Terzis & Economides, 2011). Our hypothesis for this concept was as follows:

H8. SI would have a positive effect on PU.

4.6. Facilitating conditions (FC)

Facilitating Conditions (FC) was also adopted from the UTAUT (Teo, 2009; Teo et al., 2008) and used in LMS acceptance models (Van Raaij & Schepers, 2008; Wang et al., 2009). FC represents a variety of factors that possibly influence a person's perceptions to execute a course of action and depends on the system itself and its providers. FC might stand for technical support such as helpdesks and online support services; resource factors such as time and money (Lu, Liu, Yu, & Wang, 2008); policies, regulations, and the legal environment of a system; and communication activities and active participation of organizational staff (Bueno & Salmeron, 2008).

Within the scope of the current research, FC was set to cover all the support provided by the staff during the WBT and was hypothesized to have a positive effect on PEU.

H9. FC would have a positive effect on PEU.

4.7. Goal expectancy (GE)

The significance of concepts like self-direction, self-management, self-discipline, goal orientation, and personal outcome expectations (Smith, Murphy, & Mahoney, 2003; Yi & Hwang, 2003; Shih, 2008) has been mentioned in previous studies. Based on Self-Management of Learning (Wang et al., 2009), Goal Expectancy (GE) occurs as a variable influencing individuals' beliefs that they are prepared appropriately to use a WBT through two dimensions: learners' satisfaction with their preparation for the WBT and their desirable level of success. Goals, either specific or difficult, serve a purpose towards high performance. Sometimes, greater achievements come with more difficult goals. Therefore, it might not be a good strategy for instructors to include easy assignments to support learning experiences, as loss of interest in the course or even dropouts might occur as a result (Locke, 1996; Locke & Latham, 1990). In a summative assessment within the scope of the CBAAM, a positive effect of GE on PU and on PP was demonstrated. In this framework, our hypotheses were:

H10. GE would have a positive effect on PU.

H11. GE would have a positive effect on PP.

4.8. Content (C)

Students are able to learn concepts and improve their performances by answering sample practice questions on each concept and taking continuous feedback on their responses (Moreno & Mayer, 2007; Tennyson, 1980; Tennyson & Buttrey, 1980). Learning concepts could depend on learners' attributes and their prior knowledge (Tennyson & Park, 1980). Referring to the learners' conceptualizations of the content of the WBT, Content (C) was also adapted as a variable from previous studies to fit the WBT context (Shee & Wang, 2008; Wang, 2003). The items of the WBT system were based on the EFL course content. If the items in the WBT were clear, understandable, and relevant to the course content, then students were more likely to find the system useful and satisfying (Terzis & Economides, 2011). It was indicated that C could influence perceived usefulness (Cigdem & Oncu, 2015), goal expectancy, perceived playfulness, and behavioral intention to use a WBT system (Terzis & Economides, 2011). Besides, it appeared as a significant variable determining e-learners' satisfaction (Wang, 2003). In the light of all those points, we had the following hypotheses regarding the Content:

H12. C would have a positive effect on PU.

H13. C would have a positive effect on PP.

H14. C would have a positive effect on GE.

H15. C would have a positive effect on the Behavioral Intention to use WBT.

5. Method

5.1. Research context and participants

This study was conducted in a compulsory EFL course taught at a post-secondary military school and with 1986 military students from various academic programs. Although all the students participated in the assessments, only 602 of them responded to the online questionnaire.

Among those 602 students, 65 were from the department of Computer Technology, 265 from the department of Electronics and Communication Technologies, 133 from the department of Business Administration, 33 from the department of Electrics, 70 from the department of Mechatronics, and 36 from the department of Construction. The participants' ages ranged from 17 to 23, with an average of 19.65.

The participants had two distinct listening tests. In the first one, they were provided with 20 multiple-choice items, and the mean value for all the student groups was 62 out of 100. The second one included a variety of tasks through 15 items such as drag-and-drop, fill-in-the-gaps, matching, and multiple-choice types (see Fig. 2). The overall score of this test was not as high as the first one (M = 45 out of 100).

5.2. Data collection instrument

The data were collected quantitatively through an online questionnaire uploaded on the learning management system where the students are supposed to take their end-of-semester listening comprehension exams. The questionnaire included 24 five-point Likert-type items which were adapted from the TAM literature and administered in Turkish. Table 1 displays the categories within the scale and the items within each category, except for the items excluded from the analyses as they indicated poor factorial loadings (i.e., <0.70). One item related to computer self-efficacy (CSE4), one item related to perceived playfulness (PP3), two items related to social influence (SI3 and SI4), and two items related to content (C3 and C4) were excluded from the list.

5.3. Data analysis

As an initial step, the sample size was checked. Taking a reference from the suggestion that a sample of more than 300 is satisfactory (Hair, Black, Babin, & Anderson, 2010), the sample size of the current study was far beyond the requirements. As for the actual analyses, a two-step strategy of structural equation modeling (SEM) was performed in this study. Firstly, the method of maximum likelihood (ML) was employed in the Analysis of Moment Structures (AMOS) software. Next, a confirmatory factor analysis (CFA) was performed to validate the quality of the proposed measures, and then a SEM was conducted to validate the entire model in AMOS.

6. Results

6.1. Validation of the data collection tool

In order to check the suitability of the dataset for factor analysis, the Kaiser–Meyer–Olkin (KMO) coefficient for the scale was computed as 0.92, which indicated that the factors would be reliable and the scale would be suitable for factor analysis (Field, 2000). Later, a CFA based on AMOS 21 was performed to validate and confirm the research model. According to Hair et al.'s (2010) recommendations, χ²/df < 5 was set as the acceptable level for good model fit. Besides, other multiple indicators were used to obtain a more objective conclusion, with the intention of avoiding the power problems of using the Chi-square test in a large sample. Based on the present data, the modification indices for covariances, which are available through AMOS, suggested a linkage between two observed variables as shown in Fig. 3. The

Fig. 2. A screenshot of computer based listening exam.
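Table 1 below reports a Cronbach's α reliability value for each construct. As a reminder of how this statistic is obtained from raw item scores, a minimal sketch (illustrative only, not the authors' code; the item score lists are hypothetical):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one scale.

    `items` holds one list of scores per item, with all lists covering
    the same respondents in the same order.
    """
    k = len(items)

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-respondent totals over the whole scale.
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))

# Three perfectly redundant items yield the maximum alpha of 1.0.
print(cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]))
```

Values above .70 are conventionally treated as acceptable; every construct in Table 1 is above .79.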

Table 1
Items and constructs.

Constructs and items / Mean / SD / Factor loading

Perceived Usefulness (PU) (Davis, 1989), α = .921 / 1.78 / 0.97
PU1 Using the Computer Based Assessment (CBA) will improve my work. / 1.74 / 0.990 / 0.91
PU2 Using the CBA will enhance my effectiveness. / 1.78 / 1.025 / 0.90
PU3 Using the CBA will increase my productivity. / 1.82 / 1.103 / 0.88

Perceived Ease of Use (PEOU) (Davis, 1989), α = .882 / 2.34 / 1.23
PEOU1 My interaction with the system is clear and understandable. / 2.14 / 1.336 / 0.90
PEOU2 It is easy for me to become skillful at using the system. / 2.18 / 1.357 / 0.89
PEOU3 I find the system easy to use. / 2.70 / 1.425 / 0.75

Computer Self Efficacy (CSE) (Compeau & Higgins, 1995), α = .853 / 3.42 / 1.08
CSE1 I could complete a job or task using the computer. / 3.40 / 1.241 / 0.87
CSE2 I could complete a job or task using the computer if someone showed how to do it first. / 3.59 / 1.265 / 0.84
CSE3 I can navigate easily through the Web to find any information I need. / 3.29 / 1.210 / 0.72

Social Influence (SI) (Venkatesh et al., 2003), α = .861 / 2.27 / 1.20
SI1 People who influence my behavior think that I should use CBA. / 2.27 / 1.264 / 0.84
SI2 People who are important to me think that I should use CBA. / 2.27 / 1.310 / 0.90

Facilitating Conditions (FC) (Thompson, Higgins, & Howell, 1991), α = .803 / 2.56 / 1.27
FC1 When I need help to use the CBA, someone is there to help me. / 2.54 / 1.407 / 0.81
FC2 When I need help to learn to use the CBA, the system's help support is there to teach me. / 2.58 / 1.387 / 0.83

Content (C) (Terzis & Economides, 2011), α = .824 / 1.58 / 0.91
C1 CBA's questions were clear and understandable. / 1.66 / 1.055 / 0.84
C2 CBA's questions were easy to answer. / 1.50 / 0.923 / 0.84

Goal Expectancy (GE) (Terzis & Economides, 2011), α = .791 / 2.61 / 1.177
GE1 Courses' preparation was sufficient for the CBA. / 2.35 / 1.338 / 0.74
GE2 My personal preparation for the CBA. / 2.67 / 1.383 / 0.80
GE3 My performance expectations for the CBA. / 2.83 / 1.481 / 0.72

Perceived Playfulness (PP) (Moon & Kim, 2001), α = .896 / 1.69 / 1.33
PP1 Using CBA keeps me happy for my task. / 1.68 / 1.004 / 0.92
PP2 Using CBA gives me enjoyment for my learning. / 1.68 / 1.010 / 0.90
PP4 Using CBA will lead to my exploration. / 1.72 / 1.041 / 0.77

Behavioral Intention to Use CBA (BI) (Davis, 1989), α = .925 / 2.00 / 1.66
BI1 I intend to use CBA in the future. / 1.89 / 1.242 / 0.80
BI2 I predict I would use CBA in the future. / 2.09 / 1.290 / 0.94
BI3 I plan to use CBA in the future. / 2.05 / 1.244 / 0.96
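The composite reliabilities, AVE values, and fit statistics reported in the following results can be cross-checked from the standardized loadings above together with each model's χ², degrees of freedom, and sample size. A sketch using the standard formulas (CR and AVE as in Fornell & Larcker, 1981; the one-model RMSEA formula), with N = 602 and the PU loadings from Table 1 as illustration; this is not the authors' code:

```python
import math

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances 1 - lambda^2)."""
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l * l for l in loadings)
    return squared_sum / (squared_sum + error)

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def rmsea(chi2, df, n):
    """Root mean square error of approximation for a single model."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

pu_loadings = [0.91, 0.90, 0.88]  # PU1-PU3 from Table 1
print(f"CR    = {composite_reliability(pu_loadings):.2f}")      # CR    = 0.92
print(f"AVE   = {average_variance_extracted(pu_loadings):.2f}")  # AVE   = 0.80
print(f"RMSEA = {rmsea(549.378, 215, 602):.3f}")                 # RMSEA = 0.051
```

The CR and AVE values match the Perceived Usefulness row of Table 2, and the same `rmsea` check reproduces the structural model's reported value, √((733.100 − 230)/(230 × 601)) ≈ 0.060.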

final model returned χ² = 549.378, df = 215; χ²/df = 2.555 with probability level p = 0.00 < 0.05. Several fit indices are reported here, including the root mean square error of approximation (RMSEA) = 0.051, goodness of fit index (GFI) = 0.932, and adjusted GFI (AGFI) = 0.905, in which GFI and AGFI were greater than 0.80 and RMSEA was lower than 0.08. All these values suggested that the

Fig. 3. Confirmatory factor model.

measurement model fitted the data set well. The final model with the estimated factor loadings is illustrated in Fig. 3.

In the CFA, Composite Reliability (CR), Average Variance Extracted (AVE), Maximum Shared Squared Variance (MSV), and Average Shared Squared Variance (ASV) were examined. According to Hair et al. (2010), high (above 0.70) CR values indicate good reliability. Table 2 shows that the CR of all the constructs of the CBAAM was found to be higher than 0.70 and the AVE of all the constructs was found to be higher than 0.50. Next, convergent validity and discriminant validity were examined. AVE being 0.50 or higher indicated good convergent validity. The discriminant validity was also tested by using the methods recommended by Fornell and Larcker (1981) and by inputting AMOS's correlation and standardized regression tables into the validity testing tool within the 'Stats Tools Package' (Gaskin, 2012).

Table 2
Composite reliability, average variance extracted, maximum shared squared variance, and average shared squared variance of constructs.

Constructs / CR / AVE / MSV / ASV
Perceived Usefulness / 0.92 / 0.80 / 0.61 / 0.31
Perceived Ease of Use / 0.89 / 0.73 / 0.38 / 0.26
Computer Self Efficacy / 0.86 / 0.67 / 0.28 / 0.15
Social Influence / 0.86 / 0.76 / 0.27 / 0.18
Facilitating Conditions / 0.80 / 0.67 / 0.27 / 0.16
Content / 0.83 / 0.71 / 0.58 / 0.27
Goal Expectancy / 0.80 / 0.57 / 0.15 / 0.12
Perceived Playfulness / 0.91 / 0.78 / 0.61 / 0.32
Behavioral Intention / 0.93 / 0.82 / 0.45 / 0.24

Table 2 shows that the discriminant validity thresholds (MSV < AVE, ASV < AVE) are realized for all constructs, indicating acceptable discriminant validity.

6.2. Test of the structural model

After ensuring the validity of the constructs within the measurement model, the structural model was evaluated. To determine the relationships of the constructs in the proposed model, the structural equation model was tested using AMOS 21 with the default maximum likelihood estimation method. The model returned χ² = 733.100, df = 230; χ²/df = 3.187 with probability level p = 0.00 < 0.05; and RMSEA = 0.060, GFI = 0.908, and AGFI = 0.880. All fit indices obtained in the present study showed good structural model fit to the data for the proposed research model. The resulting parameters of the research model are displayed in Fig. 4.

Fig. 4. Result of SEM (standardized estimates).

Table 3
Results of hypotheses.

Hypothesis / Path / Path coefficient / Result
H1 / PP → BI / 0.446* / Supported
H2 / PU → BI / 0.192 / Not supported
H3 / PU → PP / 0.466* / Supported
H4 / PEU → BI / 0.167* / Supported
H5 / PEU → PU / 0.300* / Supported
H6 / PEU → PP / 0.020 / Not supported
H7 / CSE → PEU / 0.362* / Supported
H8 / SI → PU / 0.241* / Supported
H9 / FC → PEU / 0.390* / Supported
H10 / GE → PU / 0.044 / Not supported
H11 / GE → PP / 0.069 / Not supported
H12 / C → PU / 0.447* / Supported
H13 / C → PP / 0.418* / Supported
H14 / C → GE / 0.346* / Supported
H15 / C → BI / −0.014 / Not supported

Table 3 summarizes the results of our hypotheses. Inspecting the model derived from the analyses, it could be clarified that there appeared a direct positive effect of perceived playfulness and perceived ease of use on the construct of behavioral intention to use the WBT systems, whereas perceived usefulness did not exert a direct influence. Along with these points, perceived usefulness and content seemed to have a direct positive effect on perceived playfulness; on the other hand, the effects of perceived ease of use and goal expectancy on perceived playfulness were not strong enough to reach statistical significance. On the side of perceived usefulness, goal expectancy did not create any positive effect; yet, social influence, perceived ease of use, and content each contributed to perceived usefulness through a strong impact. Apart from


these, computer self-efficacy and facilitating conditions both had a direct impact on perceived ease of use. Finally, goal expectancy was directly and positively influenced by content.

7. Discussion

The major contribution of the current study was to look into the variables influencing the students' intentions to use a WBT system in the context of foreign language education at a two-year post-secondary higher education institution. During the study, it was attempted to implement a WBT system specifically for an underestimated sub-skill, which is Listening Comprehension. Including audio files made the design of the listening comprehension test through the WBT system even more difficult and time-consuming, compared to other computer-based tests. Students' perceptions of the WBT systems were also investigated within the scope of this study. To illustrate, the factors within the CBAAM were analyzed descriptively, and it was seen that a great majority of the mean scores were below the value of 3 and ranged from 1.50 to 3.59, except for the dimension of computer self-efficacy (see Table 1). All those values descriptively indicated that the participants had generally negative attitudes towards the WBT systems, which is apparently an inconsistent finding with some previous studies (Cigdem & Oncu, 2015; Cigdem & Tan, 2014; Dermo, 2009; Sorensen, 2013). It was deduced that the students' getting low marks from the listening comprehension tests administered through the WBT system might have led them to perceive the process negatively.

When the relationships between the variables were investigated through the SEM, an explanatory model (see Fig. 5) was obtained. The general results emerging in the model highlighted perceived playfulness as the most important determinant of the behavioral intention to use the WBT systems, and perceived usefulness had only an indirect impact through perceived playfulness.

… on the usefulness of the system. This finding could have resulted from two other reasons. One is that the WBT aimed to test listening comprehension. The students did not have sufficient awareness of foreign language learning, because they are expected to be low-rank military personnel and not to have duties in international platforms. Another reason might be that the students were novices in using the computer as a tool for an exam. Presenting a result consistent with Terzis, Moridis, and Economides (2012), the current study did not reveal any significant influence of perceived ease of use on perceived playfulness, which might be discussed through the characteristics of the contemporary users of such systems, because they were novices in using the computer as an assessment tool although they had taken computer-related courses previously. The difficulties experienced during the administration of the computer-based listening assessment might cause a decrease in the perceived ease of use and perceived playfulness.

In line with previous studies such as Cigdem and Topcu (2015), Terzis and Economides (2011), Terzis et al. (2012), and Venkatesh and Davis (2000), social influence was attributed to be a critical determinant of perceived usefulness. Similar findings regarding the previous studies might be explained by the social sharing related to the exam format and item types before the exam sessions.

In addition to those points, it was seen that the content of the items in the test seemed to significantly affect the perceived usefulness of the system, as in the studies of Cigdem and Oncu (2015) and Terzis and Economides (2011), who suggested that users would tend to consider the system useful when their perceptions about the content of the questions in a WBT system were positive. At this point, low scores on the content meant that students perceived the test items as harder than they expected and more time-consuming than they imagined. However, preparing items that are easier to comprehend and handle could probably have a positive impact on the use of a computer-based listening comprehension test.
This finding was in line with the fundamental proposition of the
CBAAM. 8. Conclusion
As a parallel finding with Terzis and Economides (2011), the
results of the current study also indicated that perceived usefulness This study examined the constructs that influence students'
did not exert a direct impact on behavioral intention; however, this behavioral intentions to use WBT in a military vocational college
finding was controversial with the TAM literature, because some and concluded that perceived playfulness and perceived ease of use
prior studies (Cigdem & Oncu, 2015) suggested a very strong effect exerted direct effects on behavioral intention to use WBT. Accord-
of perceived usefulness on behavioral intention. This could mean ingly, when the students feel that WBT is playful enough and easy
that participants' intention to use a computer-based listening to use, their behavioral intention to use such a system seems
comprehension test would hardly be affected by their perceptions stronger. Bearing this in mind, certain techniques adopted to

Fig. 5. The model depicting students' behavioral intention to use WBA.



improve the dimensions of perceived playfulness and perceived ease of use through other means would have a positive impact on the system's acceptance. Besides, game-based or real-life-oriented themes might be embedded into the items to facilitate comprehension, so that students would find the procedures more playful and easier to use.

Perceived usefulness had no direct impact on behavioral intention to use WBT, which contradicts the TAM literature. Goal expectancy was positively impacted by the content of the questions. Furthermore, computer self-efficacy and facilitating conditions had direct effects on perceived ease of use. Perceived usefulness was positively impacted by the content of the test items, perceived ease of use, and social influence. Finally, perceived usefulness and the content of the test items had direct impacts on perceived playfulness.

To sum up, WBT is more likely to be perceived as playful when it is useful and the content of its items is easier to understand. The most practical implication for foreign language educators might be that the content of the audio items should be based on a problem, a case, or a real-life situation. This might increase the intention to use such an application as a result of an increase in perceived ease of use and playfulness.

9. Limitations

A wide range of web-based testing systems exist across different educational settings, such as those incorporating videos, graphics, animations, and simulations. In the current study, however, the students were tested on their listening comprehension, and audio files were incorporated into the assessment procedures. Listening comprehension is among the most challenging skills in both teaching and testing; thus, test-takers may have difficulties in all types of listening comprehension tests, whether paper-based or computer-based. It is also plausible that test-takers hold different interpretations of and attitudes towards various types of computer-based assessment, and their perceptions of a computer-based listening comprehension test are expected to differ from those in other subjects. Therefore, the findings ought to be interpreted through the lenses of the subject of 'listening comprehension' and of the domain of military students who have no strong goals related to learning English.

The study was, on the whole, limited to the research context, a two-year post-secondary military vocational college. Broadening the scope to other post-secondary institutions across various disciplines would add more value to similar studies. Gender was another limitation, as the male military students outnumbered the females. A future study could consider eliminating the gender imbalance by including more female students.

References

Abedi, J. (2014). The use of computer technology in designing appropriate test accommodations for English language learners. Applied Measurement in Education, 27(4), 261–272.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behaviour. Englewood Cliffs, NJ: Prentice-Hall.
Bueno, S., & Salmeron, J. L. (2008). TAM-based success modeling in ERP. Interacting with Computers, 20(6), 515–523.
Bull, J., & McKenna, C. (2004). Blueprint for computer-assisted assessment. London: Routledge-Falmer.
Chou, C., Moslehpour, M., & Le Huyen, N. T. (2014). Concurrent and predictive validity of computer-adaptive freshman English test for college freshman English in Taiwan. International Journal of English Language Education, 2(1), 143–156.
Cigdem, H., & Oncu, S. (2015). E-assessment adaptation at a military vocational college: student perceptions. Eurasia Journal of Mathematics, Science & Technology Education, 11(5), 971–988.
Cigdem, H., & Tan, S. (2014). Students' opinions on administering optional online quizzes in a two-year college Mathematics course. Journal of Computer and Education Research, 2(4), 51–73.
Cigdem, H., & Topcu, A. (2015). Predictors of instructors' behavioral intention to use learning management system: a Turkish vocational college example. Computers in Human Behavior, 52, 22–28.
Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: development of a measure and initial test. MIS Quarterly, 19(2), 189–211.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: a comparison of two theoretical models. Management Science, 35, 982–1003.
Dermo, J. (2009). E-assessment and the student learning experience: a survey of student perceptions of e-assessment. British Journal of Educational Technology, 40(2), 203–214.
Field, A. (2000). Discovering statistics using SPSS for Windows. London, UK: Sage.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Gaskin, J. (2012). Stats wiki and stats tools package. http://statwiki.kolobkreations.com/.
Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: a review of the literature. Computers & Education, 57(4), 2333–2351.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Englewood Cliffs: Prentice Hall.
Jeong, H. (2014). A comparative study of scores on computer-based tests and paper-based tests. Behaviour & Information Technology, 33(4), 410–422.
Karahanna, E., & Straub, D. W. (1999). The psychological origins of perceived usefulness and ease of use. Information and Management, 35, 237–250.
Llamas-Nistal, M., Fernández-Iglesias, M. J., González-Tato, J., & Mikic-Fonte, F. A. (2013). Blended e-assessment: migrating classical exams to the digital world. Computers & Education, 62, 72–87.
Locke, E. A. (1996). Motivation through conscious goal setting. Applied and Preventive Psychology, 5, 117–124.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting & task performance. Englewood Cliffs, NJ: Prentice Hall.
Lu, J., Liu, C., Yu, C., & Wang, K. (2008). Determinants of accepting wireless mobile data services in China. Information & Management, 45(1), 52–64.
Moon, J., & Kim, Y. (2001). Extending the TAM for a world-wide-web context. Information and Management, 38(4), 217–230.
Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Special issue on interactive learning environments: contemporary issues and trends. Educational Psychology Review, 19, 309–326.
Morris, D. (2008). Economics of scale and scope in e-learning. Teaching in Higher Education, 33(3), 331–343.
Ong, C., & Lai, J. (2006). Gender differences in perceptions and relationships among dominants of e-learning acceptance. Computers in Human Behaviour, 22(5), 816–829.
Pino-Silva, J. (2008). Student perception of computerized tests. ELT Journal, 62(2).
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.
Shee, D. Y., & Wang, Y.-S. (2008). Multi-criteria evaluation of the web-based e-learning system: a methodology based on learner satisfaction and its applications. Computers & Education, 50(3), 894–905.
Shih, H. (2008). Using a cognitive-motivation-control view to assess the adoption intention for web-based learning. Computers & Education, 50(1), 327–337.
Smith, P. J., Murphy, K. L., & Mahoney, S. E. (2003). Towards identifying factors underlying readiness for online learning: an exploratory study. Distance Education, 24(1), 57–67.
Sorensen, E. (2013). Implementation and student perceptions of e-assessment in a chemical engineering module. European Journal of Engineering Education, 38(2), 172–185.
Sun, P., Tsai, R. J., Finger, G., Chen, Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50, 1183–1202.
Taylor, S., & Todd, P. (1995). Assessing IT usage: the role of prior experience. MIS Quarterly, 19(4), 561–570.
Tennyson, R. D. (1980). Instructional control strategies and content structure as design variables in concept acquisition using computer-based instruction. Journal of Educational Psychology, 72, 525–532.
Tennyson, R. D., & Buttrey, T. (1980). Advisement and management strategies as design variables in computer-assisted instruction. Educational Communication and Technology Journal, 28, 169–176.
Tennyson, R. D., & Park, D. C. (1980). The teaching of concepts: a review of instructional design literature. Review of Educational Research, 50, 55–70.
Teo, T. (2009). Modelling technology acceptance in education: a study of pre-service teachers. Computers & Education, 52(1), 302–312.
Teo, T., Lee, C. B., & Chai, C. S. (2008). Understanding pre-service teachers' computer attitudes: applying and extending the technology acceptance model. Journal of Computer Assisted Learning, 24(2), 128–143.
Terzis, V., & Economides, A. A. (2011). The acceptance and use of computer based assessment. Computers & Education, 56(4), 1032–1044.
Terzis, V., Moridis, C. N., & Economides, A. A. (2012). How student's personality

traits affect computer based assessment acceptance: integrating BFI with CBAAM. Computers in Human Behavior, 28(5), 1985–1996.
Terzis, V., Moridis, C. N., & Economides, A. A. (2013). Continuance acceptance of computer based assessment through the integration of user's expectations and perceptions. Computers & Education, 62, 50–61.
Thompson, R., Higgins, C., & Howell, J. (1991). Personal computing: toward a conceptual model of utilization. MIS Quarterly, 15(1), 124–143.
Van Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50(3), 838–852.
Venkatesh, V. (1999). Creation of favorable user perceptions: exploring the role of intrinsic motivation. MIS Quarterly, 23, 239–260.
Venkatesh, V., & Davis, F. D. (1996). A model of the antecedents of perceived ease of use: development and test. Decision Sciences, 27, 451–481.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: four longitudinal field studies. Management Science, 46, 186–204.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Quarterly, 27(3), 425–478.
Wang, Y. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information and Management, 41(1), 75–86.
Wang, Y.-S., Wu, M.-C., & Wang, H.-Y. (2009). Investigating the determinants and age and gender differences in the acceptance of mobile learning. British Journal of Educational Technology, 40(1), 92–118.
Yi, M. Y., & Hwang, Y. (2003). Predicting the use of web-based information systems: self-efficacy, enjoyment, learning goal orientation, and the technology adoption model. International Journal of Human Computer Studies, 59(4), 431–449.
Zakrzewski, S., & Steven, C. (2000). A model for computer-based assessment: the Catherine wheel principle. Assessment & Evaluation in Higher Education, 25(2), 201–215.
