
Learning Environments Research (2020) 23:251–267

https://doi.org/10.1007/s10984-019-09299-6

ORIGINAL PAPER

Perceived service quality factors in online higher education

Daniel La Rotta1   · Olga Cecilia Usuga2 · Valentina Clavijo1

Received: 13 July 2018 / Accepted: 22 October 2019 / Published online: 30 October 2019
© Springer Nature B.V. 2019

Abstract
In recent years, the offering of online higher-education programs has grown significantly.
This growth has generated the need to evaluate the quality of these services, whose
conditions and characteristics do not correspond to those of the traditional face-to-face
modality. This article identifies and describes factors underlying the service quality
perceived by students enrolled in an online higher-education program. In a qualitative
phase, an information-gathering instrument was developed based on a literature review and
fieldwork with students. In a subsequent quantitative phase, data were collected from 120
students enrolled at a public university in Colombia (South America). Data were analysed
through exploratory factor analysis using unweighted least squares extraction and Varimax
rotation. Five factors explaining 60.335% of the total variance were identified: (1) Teach-
ers, (2) Support academic resources, (3) Administrative support, (4) User interface and (5)
Course enrollment. Further research is recommended to understand possible specificities in
other online settings.

Keywords  e-learning · Exploratory factor analysis · Online higher education · Perceived service quality · Service quality factors

Introduction

In recent years, Information and Communication Technologies, and specifically the Internet,
have contributed to the stronger positioning of higher education (HE) services offered in an
online modality (Adams et al. 2017). In fact, online HE is considered an important
alternative that could help to address the access problems that still exist in some regions (Ally
and Prieto 2014; Islam et al. 2015).
Latin America has not been immune to this situation, with increasing HE access
being declared as a main goal in the government agendas of many of its countries
(Arizabaleta and Ochoa 2016). Thus, despite the fact that in the last decades HE

* Daniel La Rotta
daniel.larotta@udea.edu.co
1 Ingeniería y Sociedad Research Group, Industrial Engineering Department, Universidad de Antioquia, Calle 67 N° 53‑108, Medellín, Colombia
2 INCAS Research Group, Industrial Engineering Department, Universidad de Antioquia, Calle 67 N° 53‑108, Medellín, Colombia


coverage has reached an average of 37.7% (Melo et al. 2017), this level is still accompa-
nied by high dropout rates (Ferreyra et al. 2017), which not only affect socioeconomic
development, but also compromise the efficient use of resources.
In this sense, growth in the use of technological tools could turn online education
(OE) into a possible alternative for coping with the above problems. During recent
years, many Latin American countries have made investments in technologies and con-
nectivity a priority in their development plans (Chiroleu 2012; MENa 2013). These
countries have also boosted the offering of online academic programs, not only to
expand coverage, but also to promote training program diversity (Rama 2012).
OE is also a reality in Colombia. According to the Ministry of Education, there were
12,000 students enrolled in online HE in 2010. By 2015, this number exceeded 65,000
and, in 2017, it was around 80,000 (SNIES 2017). Although OE has helped Colombians
gain access to HE services, these figures, as with the face-to-face scenario, have been
accompanied by high dropout levels (Estévez et  al. 2015). This situation is similar in
the global context, with online settings having higher dropout levels compared with the
face-to-face modality (Lee et al. 2013).
Therefore, because of the importance that online education currently occupies in our
society, its growing popularity, and the complexity and specificity of its processes, it
is necessary to generate more evaluations of the quality of the service being provided
(Rodríguez et al. 2014; Martínez et al. 2013).
In this sense, an important trend in studies associated with quality evaluation has
assigned a salient place to the perceptions that clients have of their service experience
(Lewis and Booms 1983; Parasuraman et al. 1988). For this reason, many investigations
have identified factors that, from the client’s point of view, explain the level of perceived
quality of the service received. Identifying those factors also reveals the variables that are
key to the client, thus allowing the service quality to be improved.
In general, many researchers accept the benefits of the factors proposed in the SERV-
QUAL model (Parasuraman et  al. 1988), considering that they can be used across all
types of services. Other authors, however, have been critical of this position, point-
ing out that these dimensions, as well as their composition, might not be generic for
all sectors and clients; therefore, they advocate their adaptation to the peculiarities of
each service (Carman 1990; Babakus and Boller 1992; van Dyke et  al. 1997; Ladhari
2008; Kashif et  al. 2014). For this reason, some researchers have set themselves the
task of developing their own models which identify specific factors and variables that
allow evaluation of quality from the point of view of the client of that particular service.
Thus, in the case of HE under face-to-face methodologies, the works of Abdullah (2006),
Annamdevula and Bellamkonda (2012), Icli and Anil (2014), Teeroovengadum et al.
(2016) and Latif et al. (2017) stand out.
With regard to the identification of factors that respond to assessments by HE stu-
dents enrolled exclusively under an online modality, evidence in the literature is scarce.
Only three studies were found, with two of these from the same university in Spain.
First, Martinez et al. (2010) identified the main dimensions that contribute to students’
perception of service quality using qualitative methods. Three years later, Martínez
et  al. (2013) complemented this work, using quantitative approaches, by proposing a
four-factor model to evaluate service quality as perceived by students. The third study
by Udo et al. (2011) involved student responses at a university in the United States. This
work proposed a modified SERVQUAL instrument, which includes five factors.


Thus, not only are studies involving purely online (not blended) environments scarce,
but the three found involved students from universities located in two different and
diverse contexts (the US and Spain). Moreover, they involved education systems in two
economically-developed countries, which do not necessarily represent the needs of students
in Latin America.
Therefore, the aim of this study is to identify and describe the factors that underlie the
service quality perceived by students enrolled in an online HE program in a Latin American
context. In this way, it is expected that elements contributing to raising the quality levels
perceived by students will be identified. Likewise, by directly including students’ assessments,
it is hoped that the high dropout rates existing in HE could also be positively impacted.

Literature review

Service quality

Service quality has been the subject of many reflections over the years. Its diffuse nature
(Parasuraman et al. 1985; Camisón et al. 2007) has generated different approaches for clar-
ifying its meaning, with most giving prominence to the perceptions that customers have
of the service that they receive. Thus, renowned authors such as Lewis and Booms (1983)
define service quality as “a measure of how well the service level delivered matches the
customer’s expectations”. Grönroos (1984), on the other hand, refers to service quality as
“the outcome of an evaluation process, where the consumer compares his expectations
with the service he perceives he has received”. Parasuraman et al. (1988) define service
quality as “the consumer’s judgment about an entity’s overall excellence or superiority”.
In addition to its conceptualisation, the operationalisation of service quality has allowed
us not only to better understand it, but also to manage it. The SERVQUAL model, devel-
oped by Parasuraman et al. (1988), is one of the most-recognised frameworks. Its authors
consider that it can be used for a wide range of services, because it includes a ‘basic skel-
eton’ that can be adjusted to specific needs. SERVQUAL can be used to evaluate service
quality in terms of the difference between the expectations of the client and perceptions of
the service received. The instrument consists of 22 items with five dimensions: Tangibles,
Reliability, Responsiveness, Assurance and Empathy.
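To make the gap-score logic concrete, the following minimal Python sketch computes perception-minus-expectation gaps per dimension. The item grouping (the commonly reported 4/5/4/4/5 split of the 22 items), the array names and the simulated ratings are illustrative assumptions, not the authors' instrument.

```python
import numpy as np

# Illustrative grouping of the 22 SERVQUAL items into the five dimensions
# (indices 0-21); the exact assignment here is an assumption for the example.
DIMENSIONS = {
    "Tangibles":      range(0, 4),
    "Reliability":    range(4, 9),
    "Responsiveness": range(9, 13),
    "Assurance":      range(13, 17),
    "Empathy":        range(17, 22),
}

def servqual_gap_scores(expectations: np.ndarray, perceptions: np.ndarray) -> dict:
    """Mean perception-minus-expectation gap per dimension.

    Both inputs are (n_respondents, 22) arrays of Likert ratings.
    Negative gaps mean the perceived service falls short of expectations.
    """
    gaps = perceptions - expectations               # item-level gap scores
    return {name: float(gaps[:, list(idx)].mean())  # average over items and respondents
            for name, idx in DIMENSIONS.items()}

# Example with simulated 1-5 ratings for 100 hypothetical respondents
rng = np.random.default_rng(0)
E = rng.integers(3, 6, size=(100, 22))   # expectations
P = rng.integers(2, 6, size=(100, 22))   # perceptions
print(servqual_gap_scores(E, P))
```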
Despite its wide acceptance, it has also been criticised (Carman 1990; Babakus and
Boller 1992; Cronin and Taylor 1992; van Dyke et al. 1997; Ladhari 2008, 2009; Kashif
et al. 2014). Carman (1990) suggests that the above dimensions are not generic for all sec-
tors and recommends adapting the subject and wording of the items to the context of each
service. Similarly, Babakus and Boller (1992) states that the dimensionality of service
quality could depend on the type of service under study. On the other hand, Cronin and
Taylor (1992) argue that an operationalisation based on the difference between expectations
and performance is not convenient. Therefore, they propose a model called SERVPERF
which, despite including the same items and dimensions of SERVQUAL, is based exclu-
sively on performance evaluation. On the other hand, Teas (1993), Ladhari (2008) and Lad-
hari (2009) state that the concept of expectations is loosely defined and open to multiple


interpretations. Moreover, Ladhari (2008) argues that SERVQUAL focuses on the process
of service delivery rather than the outcomes of service encounters.
In spite of the wide popularity of SERVQUAL and SERVPERF as generic frameworks,
many authors have developed specific instruments to collect the possible peculiarities of
each type of service and client in a more-detailed way (Babakus and Boller 1992; Seth
et al. 2005; Martínez and Martínez 2007; Latif et al. 2017).

Service quality models in face‑to‑face HE

In the specific case of face-to-face HE settings, different authors have proposed models
for evaluating service quality. Whereas some have developed adapted versions of SERV-
QUAL (Đonlagić, Sabina and Samira 2015; Galeeva 2016), others have created original
instruments.
For example, Abdullah (2006) drew on the responses of undergraduate students from
Malaysia to propose an instrument called HEdPERF, which has six factors: Non-Aca-
demic Aspects, Academic Aspects, Reputation, Access, Program Issues, and Understand-
ing. The reliability of these scales was satisfactory using Cronbach’s alpha and split-half
coefficients. Also validity was checked using face, content, convergent, discriminant and
criterion validities. Torres and Araya (2010) developed a model (U-Cals), based on the
perceptions of undergraduate students of Chilean universities, with six dimensions: Teach-
er’s Attitude and Behaviour, Teacher’s Competencies, Curriculum, Administrative Staff,
Facilities and Course Organisation. To determine the reliability of the subscales, the authors
evaluated Cronbach’s alpha and composite reliability coefficients; to measure the validity of
the scale, they checked content, construct and criterion validity. On the other hand,
Annamdevula and Bellamkonda (2012) drew on data collected from undergraduate
students from India in proposing HiEdQUAL, which includes five factors: Teaching and
Course Content, Administrative Services, Academic Facilities, Campus Infrastructure and
Support Services. Cronbach’s alpha coefficient confirmed the reliability of the instrument;
the study also checked face, content, convergent, discriminant and nomological validity.
Icli and Anil (2014) developed a scale for Master’s degree (MBA) programs. The HED-
QUAL scale was developed based on the perceptions of students at Turkish universities and
identifies five factors: Academic Quality, Administrative Service Quality, Library Services
Quality, Quality of Providing Career Opportunities and Supportive Services Quality. To
check the reliability of each dimension, Cronbach’s alpha and composite reliability coeffi-
cients were calculated. For validating the five constructs, the authors checked face, content,
convergent and discriminant validities. Finally, Latif et al. (2017) proposed a scale called
HiEduQual that is based on students from Pakistan and has six dimensions: Teacher Qual-
ity, Administrative Services, Knowledge Services, Activities, Continuous Improvement
and Leadership Quality. The authors reported the composite reliability coefficient and
provided evidence for discriminant and convergent validity.

Service‑quality models in online higher education

Regarding the development of HE models in online settings, little was found in the litera-
ture. In one of the studies, Selim (2007) determined the critical success factors in online
learning from the student’s perspective. The author based conclusions on the responses of
students in courses that combine e-learning tools with traditional learning tools. The study
was conducted in the United Arab Emirates and involved eight dimensions: Instructor’s


Attitude Towards and Control of Technology, Instructor’s Teaching Style, Student Moti-
vation and Technical Competency, Student Interactive Collaboration, E-learning Course
Content and Structure, Ease of On-campus Internet Access, Effectiveness of Information
Technology Infrastructure and University Support of e-learning Activities. The author
checked the validity of each factor through confirmatory factor analysis. Udo et al. (2011)
proposed a modified version of SERVQUAL. The study involved undergraduate and gradu-
ate students enrolled for credit in an e-learning class within the previous 6  months at a
public university in the United States. The instrument has five factors: Assurance, Empa-
thy, Responsiveness, Reliability and Website Content. Reliability was checked using Cron-
bach’s alpha and composite reliability coefficients, whereas construct validity was assessed
using covariance analysis of linear structural equations. On the other hand, Martínez et al.
(2013) developed a scale based on, first, the E-S-QUAL model which measures service
quality delivered by Web sites on which customers shop online (Parasuraman et al. 2005)
and, second, the generation of items captured directly from students’ voice using the Criti-
cal Incident Technique. Their model consists of 24 questions distributed among the four
dimensions of Core Business (teaching), Facilitative or Administrative Services, Support
Services and User Interface. Cronbach’s alpha and composite reliability were tested, as well
as content, convergent, discriminant, nomological and predictive validities. The study was
developed with undergraduate students of a Spanish university. Finally, Uppal et al. (2017)
proposed another extension of SERVQUAL based on perceptions of undergraduate and
postgraduate students of public universities in Pakistan. The model involves three dimen-
sions: (1) Service dimension, (2) Information dimension, and (3) System dimension. The
authors evaluated reliability alpha and composite reliabilities and also checked convergent
and discriminant validities.

Methodology

Qualitative phase

Literature review

The study began with a literature review aimed at locating relevant studies regarding ser-
vice-quality evaluation from the client’s point of view. Generic models were identified
(Parasuraman et al. 1988; Cronin and Taylor 1992; Grönroos 1984) together with specific
models for particular economic sectors (Ladhari 2008; Pérez and Giraldo 2014). Studies
focused on the face-to-face HE sector (Abdullah 2006; Icli and Anil 2014; Annamdevula
and Bellamkonda 2012; Torres and Araya 2010) were reviewed. Finally, service-quality
models for online HE settings were identified (Martínez et al. 2013; Udo et al. 2011;
Selim 2007). At the end of this phase, 10 models were selected as possible bases (dimen-
sions, scales, items, etc.) for the development of the information-gathering instrument.

Information collection instrument

The creation of the instrument was based on an inventory of  all the questions included
in the 10 models chosen. Also, in order to identify additional aspects unforeseen in these


studies, the systematic literature review on service quality evaluation carried out by Pérez
and Giraldo (2014) was taken into account.
The research team then grouped the questions into four categories according to their
similarities and eliminated possible redundancies. Moreover, with the aim of further
enriching the themes included by incorporating the student’s point of view, the Critical
Incident Technique (Hayes 2002) was used. This method involves direct fieldwork with
users (students) to identify aspects considered crucial to the service provided. Thus, 30
surveys were administered, each requesting the identification of six crucial aspects.
The next step involved assessing whether each of the responses obtained had already
been identified or whether it should be added as new.
The research team then drafted each question uniformly and precisely. A general state-
ment and a response scale were also defined, namely, a 5-point Likert scale ranging from
Strongly Agree (5) to Strongly Disagree (1). Likewise, it was defined that the service-qual-
ity construct would be measured using a perceptions-only approach, including process and
output items, as suggested by Ladhari (2008).
Finally, the instrument was validated through testing with four students and three
program directors. The final instrument was made up of 36 questions distributed across
four dimensions.

Quantitative phase

Data collection

The study involved a Telecommunications Engineering department at a public university
in Colombia (University of Antioquia). The online program has been offered since 2005
to students who mostly combine their learning process at the university with particular
work activities. The majority of students are over 26 years old (61%). The total student
population during the period when the fieldwork was conducted (2017–2018) was 136. The
instrument was made available online through Google Tools. After 6 weeks, a
sample of 120 anonymous observations had been obtained, which was considered sufficient
to continue with the study (see Table 1).

Table 1  Sample distribution

Variable          Category   Percentage
Gender            Male       79
                  Female     21
Academic level    1          38
                  2          14
                  3           8
                  4           6
                  5           5
                  6          13
                  7           1
                  8           5
                  9           5
                  10          5


Exploratory Factor Analysis (EFA)

Data analysis began with central tendency, dispersion and distribution measures. Next,
Pearson and partial correlations, as well as sampling adequacy measures for each variable,
were calculated. The Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s sphericity test were
also used to confirm the feasibility of applying EFA to the collected data.
Skewness and kurtosis analyses showed non-compliance with the normality assump-
tion for some variables. Therefore, it was decided to extract factors through the unweighted
least squares method (ULS), because it makes no distributional assumptions regarding the
observed variables (Flora et al. 2012; MacCallum 2009).
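As an illustration of these feasibility checks and the extraction step, the sketch below shows how they could be run in Python with the factor_analyzer package. The DataFrame `responses` and the CSV file name are hypothetical, and the package's 'minres' fitting method is used as the unweighted-least-squares estimator (it minimises the unweighted sum of squared residuals). This is a sketch under those assumptions, not the authors' original code.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical survey export: rows = students, columns = questionnaire items.
responses = pd.read_csv("survey_responses.csv")

# Feasibility checks: Bartlett's test of sphericity and the KMO measure.
chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_total = calculate_kmo(responses)
print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_total:.3f}")

# Unrotated run to inspect eigenvalues (Kaiser criterion / scree test).
fa = FactorAnalyzer(rotation=None, method="minres")
fa.fit(responses)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())   # Kaiser criterion as a starting point

# Final extraction: unweighted least squares ("minres") with Varimax rotation.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="minres")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
communalities = pd.Series(fa.get_communalities(), index=responses.columns)
print(loadings.round(3))
```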

Results

Deciding the number of factors to retain was guided by the Kaiser criterion and the scree
test, as well as the conceptual interpretability of the identified factors (Hair et  al. 2010;
Worthington and Whittaker 2006). From the first run, Varimax rotation yielded a clearer
arrangement of the variables.
Each variable’s contribution to the factor solution was first analysed. Those with factor
loadings below 0.5 were identified. These variables were evaluated for their communalities
and conceptual interpretability, and one was eliminated (generally the one with the lowest
communality) before re-running the EFA. In this way, four items were removed. Likewise,
those variables with cross-loadings greater than 0.4 on one or more additional factors were
considered problematic. Using this approach, two more questions were removed. Finally, eight
additional items were eliminated because their communalities were less than 0.5, suggest-
ing that those variables were not adequately accounted for by the factor solution (Hair et al.
2010). Thus, five factors including 22 variables were identified, with a total explained vari-
ance of 60.335% (Table 2).
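The item-retention rules just described (primary loading below 0.5, cross-loadings above 0.4, communality below 0.5, one item removed per run) can be expressed as a simple loop. The sketch below reuses the hypothetical `responses` DataFrame and factor_analyzer setup from the previous example; it only illustrates those decision rules and is not the authors' exact procedure.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def prune_items(responses: pd.DataFrame, n_factors: int,
                min_loading: float = 0.5, max_cross: float = 0.4,
                min_communality: float = 0.5):
    """Drop one weak item at a time until all retention rules are satisfied."""
    items = list(responses.columns)
    while True:
        fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="minres")
        fa.fit(responses[items])
        loadings = pd.DataFrame(fa.loadings_, index=items).abs()
        communality = pd.Series(fa.get_communalities(), index=items)

        primary = loadings.max(axis=1)               # loading on the item's own factor
        n_high = (loadings > max_cross).sum(axis=1)  # factors loaded above the cross threshold

        problematic = (primary < min_loading) | (n_high > 1) | (communality < min_communality)
        if not problematic.any():
            return items, loadings, communality
        # Remove the single worst offender (lowest communality) and re-run the EFA,
        # mirroring the one-item-per-run procedure described in the text.
        items.remove(communality[problematic].idxmin())

# Hypothetical usage, assuming `responses` from the previous sketch:
# kept_items, final_loadings, final_communalities = prune_items(responses, n_factors=5)
```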
Regarding the reliability of the extracted factors, alpha coefficients were higher than 0.7
in all cases. Item-to-total correlations had values greater than 0.5 (Table 3) and inter-item
correlations of each factor exceeded 0.3. Therefore, the three tests supported the factors’
reliabilities (Hair et al. 2010). In order to check unidimensionality, EFA was run for each
of the factors. Extraction yielded a single factor and all factor loadings were greater than
0.622, supporting this assumption (Hair et al. 2010) (Table 3).
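For reference, the reliability indicators used here (Cronbach's alpha and corrected item-to-total correlations) can be computed directly with pandas. The sketch below again assumes the hypothetical `responses` DataFrame introduced earlier and uses the Teachers items (Q31, Q30, Q26, Q36, Q32) as an example factor; it is illustrative only.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Standard Cronbach's alpha for the items of one factor."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the other items in the factor."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items},
        name="corrected item-total correlation",
    )

# Example for one factor (column names assumed to match the questionnaire items).
teacher_items = responses[["Q31", "Q30", "Q26", "Q36", "Q32"]]
print(cronbach_alpha(teacher_items))
print(corrected_item_total(teacher_items))
```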
Finally, despite the exploratory nature of this work, it is important to mention two
aspects regarding the possible  validity of the factors. First, the instrument’s construction
process, which included an exhaustive literature review, fieldwork for generating items
directly from students’ voice (Critical Incident Technique) and a qualitative assessment
made by students and program managers, contributed to its content validity. Second, con-
cerning its possible criterion validity, the original questionnaire included an additional item
for evaluating students’ general satisfaction with their academic program (PS1). Thus it
was decided to calculate the correlation between the average of the 22 questions that made
up the perceived service-quality factors and the answers to question PS1. The obtained
correlation of 0.68 provides preliminary support for criterion validity. These two reflections do not replace
the need for future analysis under a confirmatory perspective using a different sample (see
Limitations section).
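The criterion-validity check reduces to a single Pearson correlation. A minimal sketch, again assuming the hypothetical `responses` DataFrame with the 22 retained items plus a PS1 column:

```python
# Pearson correlation between the mean of the 22 retained items and the
# general-satisfaction item PS1 (the paper reports r = 0.68 for its sample).
retained = [c for c in responses.columns if c != "PS1"]   # 22 perceived-quality items
mean_quality = responses[retained].mean(axis=1)
r = mean_quality.corr(responses["PS1"])
print(f"Correlation with overall satisfaction: {r:.2f}")
```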

Table 2  Exploratory factor analysis results for rotated factor matrix

Item   Loading (secondary > 0.3)   Statement

Factor 1: Teachers
  Q31  0.786           Teachers care about student learning
  Q30  0.731 (0.323)   Teachers manage to develop student interest in the subject
  Q26  0.701 (0.392)   Teachers transmit their knowledge clearly
  Q36  0.684 (0.310)   Teachers respond promptly to questions that students may have outside of class
  Q32  0.607           Teachers know the content of courses related to the one they teach and establish relationships between them

Factor 2: Support academic resources
  Q4   0.738 (0.308)   The academic program prepares students adequately to perform in the labor market
  Q9   0.640 (0.317)   The curriculum has enough elective subjects that allow students to further explore their topics of interest
  Q3   0.626           The academic program has enough teacher's assistants to support the teaching–learning process
  Q2   0.596 (0.312)   The academic program has enough teachers to support the teaching–learning process
  Q19  0.524 (0.329)   The academic program has enough books, magazines, databases and other materials to support the learning process

Factor 3: Administrative support
  Q12  0.804           The academic program has enough administrative support staff to respond to requests
  Q14  0.736           The program support staff provides answers to administrative requests within the promised deadline
  Q18  0.683           The administrative staff is willing to help when required
  Q13  0.662 (0.357)   The academic program has systems in place which allow performing academic-administrative processes or solving problems in a non-face-to-face manner (by telephone, email, among others)

Factor 4: User interface
  Q21  0.735           The program has an academic platform with various tools that facilitate lesson development (e.g. student–teacher interaction, screen sharing, video playback, among others)
  Q22  0.668           The program has an academic platform with various tools that facilitate the learning process outside the classroom (e.g. student–teacher forums, self-assessments, recorded lessons, among others)
  Q25  0.647 (0.313)   The program makes a stable platform available to allow the continuity of academic activities
  Q20  0.618           The academic program has staff that attends to possible technical failures of the platform (Moodle, WizIQ) in a timely manner

Factor 5: Course enrollment
  Q7   0.658 (0.320)   The program offers enough spots at the time of course enrollment
  Q6   0.639           Course registration process at the beginning of each semester is easy to perform
  Q10  0.574           The program allows for the enrollment of an adequate number of courses per semester, according to students' needs
  Q8   0.558           The process of canceling courses is easy to perform through the semester

Extraction method: unweighted least squares. Rotation method: Varimax with Kaiser normalization. Only loadings greater than 0.3 are shown; the loading on the item's own factor is listed first, with any secondary loading above 0.3 in parentheses.
KMO = 0.8767. Bartlett's sphericity test: approx. chi-squared = 1532.7, df = 231, sig. = 0.000.
Eigenvalues: 3.1484, 2.7634, 2.727, 2.4967, 2.1384. Variance explained per factor: 14.311%, 12.561%, 12.395%, 11.349%, 9.720% (cumulative 60.335%).

Table 3  Validity and reliability tests

Factor / Item                 Loading   AVE (%)   Alpha   Alpha if item deleted   Corrected item-total correlation
Teachers                                62.5      0.891
  Q31                         0.866                       0.852                   0.801
  Q30                         0.864                       0.852                   0.799
  Q26                         0.763                       0.872                   0.715
  Q36                         0.735                       0.878                   0.688
  Q32                         0.711                       0.882                   0.669
Support academic resources              53.5      0.849
  Q4                          0.794                       0.806                   0.711
  Q9                          0.750                       0.820                   0.653
  Q3                          0.724                       0.826                   0.637
  Q2                          0.701                       0.813                   0.678
  Q19                         0.685                       0.827                   0.626
Administrative support                  64.526    0.876
  Q12                         0.880                       0.816                   0.797
  Q14                         0.808                       0.838                   0.743
  Q18                         0.774                       0.859                   0.688
  Q13                         0.744                       0.850                   0.716
User interface                          58.121    0.842
  Q21                         0.871                       0.763                   0.763
  Q22                         0.749                       0.813                   0.647
  Q25                         0.718                       0.805                   0.674
  Q20                         0.700                       0.817                   0.636
Course enrollment                       45.129    0.758
  Q7                          0.703                       0.695                   0.591
  Q6                          0.668                       0.694                   0.569
  Q10                         0.690                       0.691                   0.579
  Q8                          0.622                       0.723                   0.525

Discussion

The objectives of this study were to identify and describe the factors that underlie the ser-
vice quality perceived by students in an online HE program. The resulting factors and their
possible interpretation in relation to other research must take into account the diversity of
the contexts where they were developed. In particular, past studies had specific charac-
teristics such as: (1) universities’ countries of origin (cultures of different countries such
as United Arab Emirates, India, Pakistan, Turkey, Mauritius, Chile, United States, Malay-
sia and Spain), (2) types of students (undergraduate, graduate, both), (3) training mode
(online, face-to-face, blended) and (4) instruments’ structure (SERVQUAL, SERVQUAL
adaptation, structure development).
Our study involved 120 undergraduate Engineering students at a Colombian public uni-
versity studying in online environments. Also this study did not assess the distance educa-
tion learning environment, as others have done (Walker and Fraser 2005; Fernandez et al.
2015), but identified the specific factors that underlie the quality perceived by students.
Thus, despite the diversity indicated, the results generally are in line with the factors
found in other studies. The five factors and the 22 variables identified are described below.


Teachers

This factor includes teachers’ concern for students’ learning, ability to awaken students’
interest, clear transmission of knowledge, timely response to questions, and establishment
of thematic relationships with other courses.
Our results coincide, in general terms, with those obtained in the studies in face-to-face
contexts (Abdullah 2006; Annamdevula and Bellamkonda 2012; Icli and Anil 2014; Latif
et al. 2017). All of these past studies involved characteristics associated with teacher com-
petences and attitudes in a single factor. In particular, of these studies, only Icli and Anil
(2014) involved Master’s students. Other studies, such as that of Torres and Araya (2010),
identified two dimensions, not one, associated with the teacher’s role (Competences and
Attitude and Behaviour). Finally, Teeroovengadum et al. (2016), through their hierarchical
model, identified these same two factors as sub-dimensions of another dimension called
Core Educational Quality.
Regarding studies carried out in online settings, the situation is similar. For studies
involving structures based on SERVQUAL (Udo et  al. 2011; Uppal et  al. 2017), teacher
characteristics appeared to be saliently distributed among dimensions such as Empathy,
Responsiveness, Assurance and Reliability. Both of these studies include undergraduate
and graduate students.
On the other hand, regarding structures developed for online environments which do not
use SERVQUAL as a base, Martínez et al. (2013) presented a dimension for the teacher’s
attitudes and competences. Finally, Selim (2007) highlighted one factor associated with
teachers’ attitude and another factor associated with their performance in the use of virtual
media.
Under these conditions, when evaluating the quality being perceived, these studies sup-
port the importance of the teacher’s academic and pedagogical qualities, independent of
setting, level of training and specific students’ context. This dimension has appeared con-
sistently as an important component when assessing distance learning environments. Addi-
tionally, it seems to favour other distance learning outcomes such as student satisfaction
(Fernandez et al. 2015; Walker and Fraser 2005) and students’ motivation in using online
discussion in their learning process (Ali et al. 2016).

User interface

This factor is based on four variables: availability of tools within the online academic plat-
form that facilitate lesson development (e.g. student–teacher interaction, screen sharing,
video playback), availability of tools within the platform that facilitate the learning process
outside the classroom (e.g. student–teacher forum, self-evaluations, recorded lessons), sta-
bility and availability of the virtual platform, and availability of personnel to address its
possible failures.
This dimension, as anticipated, is not part of the structure of the models developed for
face-to-face environments. On the contrary, in the four studies with students in online set-
tings, a dimension associated with the service interaction platform always appears. Udo
et  al. (2011) include a factor called Website Content, which highlights aspects such as
audio quality, animations and multimedia. Likewise, Uppal et al. (2017) present a similarly-
named factor (Course Website) which involves aspects such as information relevance, ease
of use and updating. Martínez et  al. (2013) propose two factors: User Interface includes
navigation speed, connectivity, robustness and navigability; Support Services includes


synchronic and interaction activities. Finally, Selim (2007) proposes two sub-dimensions
within a factor called Technology: the first factor includes easy access, browsing speed and
ease of use; and the second factor includes classmate interaction, instructor contact and
infrastructural technology efficiency.
As observed, the results of the four studies are in line with our findings. Similar to ours,
two studies highlight aspects associated not only with the platform infrastructure robust-
ness (speed, connectivity, troubleshooting, etc.), but also with its interaction activities. On
the other hand, the other two studies focus their platform indicators on characteristics other
than the infrastructure quality of the network that supports the service. This last situation
must be interpreted with caution, because it does not imply that this aspect is unimportant
for the quality perceived by students. By contrast, this could be explained in terms of spe-
cific aspects of the context in which these studies were conducted: (e.g. universities with an
excellent network service so that students do not have to ‘worry’ about this variable).
With respect to the interaction tools that support synchronous activities, the studies of
Udo et al. (2011), Selim (2007) and Martínez et al. (2013) coincide with ours by highlight-
ing its relevance. None of the studies highlights asynchronous tools such as self-evaluations
or recorded-lesson playback, among others.
Finally, this dimension is relevant for the relationship that it enables between teachers and
learners and the possibilities that it offers for interaction and collaboration between students,
which are fundamental for satisfaction (Fernandez et al. 2015; Walker and Fraser 2005).

Support academic resources

This factor highlights resources that complement the training process, particularly avail-
ability of teacher assistants and professors, books and support material, elective subjects to
further explore topics of interest, and an adequate level of training to perform in the labour
market.
When comparing our results with those of the other studies, the conclusions are not
as uniform as for the above factors. In the case of face-to-face studies, it is not surprising
to find a dimension associated with the physical resources that support the academic process. In the
case of studies in online settings, the situation is different. While those studies that used
SERVQUAL as a base structure tended to preserve the Tangible dimension, its variables
correspond to the user interface characteristics mentioned in the previous factor. On the
other hand, in the two studies on virtual settings that did not use SERVQUAL as the base
structure, there is no mention of a dimension associated with physical installations (Mar-
tínez et al. 2013; Selim 2007).
None of the past studies in online settings highlighted the importance of academic
resources related to availability of professors, teacher assistants, elective subjects or bib-
liographic support material. This could be a consequence of possible budgetary limitations
of public universities in Colombia relative to other economies. In this regard, it is impor-
tant to note that, in two of the face-to-face studies, similar features were presented under a
dimension called curricular aspects (Torres and Araya 2010; Teeroovengadum et al. 2016).
Finally, when comparing the composition of this factor with the factors consistently
found for a distance learning environment, two of our five resulting items (Items 4 and
9) are similar to the items in the so-called Personal Relevance dimension, which has been
found to be related to satisfaction (Fernandez et al. 2015; Walker and Fraser 2005).


Administrative support

This factor involves the timely response to administrative requests, the availability of per-
sonnel to do so, the support staff’s attitude in addressing concerns, and the existence of
systems that facilitate the resolution of problems in a non-face-to-face manner. This fac-
tor was found in all studies conducted for face-to-face services, where administrative sup-
port processes are fundamental (Abdullah 2006; Torres and Araya 2010; Annamdevula and
Bellamkonda 2012; Icli and Anil 2014; Teeroovengadum et al. 2016; Latif et al. 2017).
On the other hand, those studies in online environments that used SERVQUAL as the
basis, did not have specific factors associated with administrative services (Udo et al. 2011;
Uppal et al. 2017). Among those studies without SERVQUAL as their main structure, the
situation is contradictory. Thus, Martínez et al. (2013) identify a dimension called Facilita-
tive or Administrative Services, whereas Selim (2007) does not.
In conclusion, all the face-to-face studies highlight the importance of administrative
support for the quality of educational service. In contrast, in online settings, the situation
is not so clear, with two of the five studies (Spain and Colombia) showing this as impor-
tant and the other three studies not identifying this as remarkable (United States, Pakistan,
United Arab Emirates). This situation invites further exploration of the possible relevance
of this dimension to online settings in order to understand the reasons why it appears in
some contexts but not in others (cultural diversity, administrative process complexity, clar-
ity and effectiveness, etc.).

Course enrollment

This factor involves four variables: availability of enrollment spots, ease of enrollment,
adequate relation of courses offered to particular training needs, and ease of course can-
cellation during the semester. Regarding the studies in face-to-face settings, none of these
highlighted a factor associated with enrollment processes as an independent dimension.
Only Latif et al. (2017) included two similar items in a dimension called Administrative
Services. In the studies in online environments, only Martínez et al. (2013) highlighted a
similar item under their Support Services factor.
These results suggest that the presence of this dimension in our study arose from the
context of our students belonging to a Colombian public university where administrative
processes can be complex or unclear. Thus, our students seem to focus on course enroll-
ment processes, on which their academic experience during the semester largely depends.

Conclusions

The aim of this study was to identify and describe the factors that underlie the service
quality perceived by students in a specific online HE program. The resulting factors, and
especially their comparison with other research, allow us to identify the presence of: (1)
common characteristics across all studies (face-to-face and online), (2) characteristics that
dominate in online settings and, within these, (3) characteristics that respond to the specific
contexts where they were developed.
Thus, the relevance of the teacher’s role to student perceptions seems to be highlighted
regardless of the setting (face-to-face or online), country of origin, type of students (under-
graduate, graduate) or measurement instrument (SERVQUAL, SERVQUAL adaptation,


own structure development). In particular, all of the studies highlight the importance of
teacher competences and attitudes towards students. In a world in which technologies tend
to replace certain professional roles, this situation seems to support the fundamental con-
tributions that teachers make to their students’ learning processes, either face-to-face or
virtual.
The above comparison also allows identification of the generic relevance of the
resources associated with the training process. In this case, these factors seem to respond
to specific aspects of the training mode (face-to-face vs. online) and, in some cases, to
possible needs or specific shortcomings of the context in which the study is conducted. In
particular, in all the studies in online environments, the interaction platform had an impor-
tant role. On the other hand, the possible importance of other support resources was not so
uniform in the studies analysed, suggesting the need for new investigations to identify why
these seem relevant in some cases but not in others (e.g. teacher assistants, bibliographic
material, elective courses).
Likewise, it is important to point out the generic relevance of administrative support
processes to students in face-to-face environments. In this case, the need to have clear
and robust processes, along with suitable administrative staff attitudes and competencies,
is highlighted. In the case of virtual settings, the situation does not seem to be so clear,
because administrative support stands out as a dimension in only two of the five studies
analysed (Spain and Colombia), which again invites new investigations to clarify the mat-
ter. In the Colombian case, an independent factor appeared for a specific administrative
process (course enrollment), which could respond to context peculiarities still unknown to
the researchers (student needs, cultural aspects, regulations, process clarity, etc.).
Finally, despite the fact that the objective of our research was not to propose a new scale
for quality assessment, some aspects of our information-collection instrument need to be
highlighted. As mentioned, one of the reasons for creating our own tool was to include aspects of
our particular context. Thus, as far as we know, the exploratory instrument created is the
only one responding to a Latin American, exclusively-online, higher-education context.
In terms of the topics being assessed, it is also the only study that included aspects such
as asynchronous tools (self-evaluations, recorded lessons playback, etc.) and availability
of professors, teacher assistants and elective subjects. It is also the only study to assess
topics such as teachers’ abilities to establish relations with other courses and
aspects related to course enrollment processes.
It is recognised that the stability and replicability of the factors found through EFA are
higher when the sample is large, because spurious correlations are then less likely to
arise. In this study, the size of the population (136 students) limited the possibility of a
greater number of observations. Despite this, the sample size (120) is considered valid for
exploratory purposes, not only because it represents a majority of the population (88.2%),
but also because each resulting factor had at least four items and all the factor loadings
were greater than 0.5 (Worthington and Whittaker 2006).
It is also important to emphasise the exploratory nature of our objectives, resulting fac-
tors and conclusions, which require additional analysis under a confirmatory perspective
(CFA). Because the aim of this study was not to propose a new measurement scale, the
developed tool should be considered exploratory and a basis for the future construction of
an evaluation scale. Addressing this goal was not possible in this study because it would
have required a different sample (Hair et al. 2010), which could not be obtained given the
limitations regarding the size of the population.


References
Abdullah, F. (2006). The development of HEdPERF: A new measuring instrument of service quality for the higher education sector. International Journal of Consumer Studies, 30(6), 569–581. https://doi.org/10.1111/j.1470-6431.2005.00480.x.
Adams, S., Cummins, M., Davis, A., Freeman, A., Ananthanarayanan, V., & Hall-Giesinger, C. (2017). Horizon Report 2017 Higher Education Edition. https://doi.org/10.1002/ejoc.201200111.
Ali, M., Hishamuddin, N., Tahir, L., & Said, M. (2016). Reinforcing teacher’s role in retaining students’ interests in discussing online in their learning process at Malaysian tertiary institutions. Journal of Theoretical and Applied Information Technology, 93(2), 323–331.
Ally, M., & Prieto, J. (2014). What is the future of mobile learning in education? Revista de Universidad y Sociedad del Conocimiento (RUSC), 11(1), 142–151.
Annamdevula, S., & Bellamkonda, R. S. (2012). Development of HiEdQUAL for measuring service quality in Indian higher education sector. International Journal of Innovation Management and Technology, 3(4), 412–416. https://doi.org/10.7763/IJIMT.2012.V3.265.
Arizabaleta, S., & Ochoa, A. F. (2016). Hacia una educación superior inclusiva en Colombia. Pedagogía y Saberes, (45), 41–52. Retrieved from http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0121-24942016000200005&lang=pt
Babakus, E., & Boller, G. W. (1992). An empirical assessment of the SERVQUAL scale. Journal of Business Research, 24(3), 253–268. https://doi.org/10.1016/0148-2963(92)90022-4.
Camisón, C., Cruz, S., & González, T. (2007). Gestión de la calidad: Conceptos, enfoques, modelos y sistemas. Madrid: Pearson Prentice Hall. ISBN 978-84-205-4262-1.
Carman, J. M. (1990). Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions. Journal of Retailing, 66(1), 33. https://doi.org/10.1016/S0148-2963(99)00084-3.
Chiroleu, A. (2012). Políticas de Educación Superior en América Latina en el Siglo XXI: ¿Inclusión o Calidad? Archivos Analíticos de Políticas Educativas, 20(13), 1–21.
Cronin, J. J., & Taylor, S. A. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56(3), 55–68. https://doi.org/10.2307/1252296.
Estévez, J. A., Castro, J., & Granobles, H. R. (2015). La educación virtual en Colombia: Exposición de modelos de deserción. Apertura: Revista de Innovación Educativa, 7(1). Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=102888761&lang=es&site=ehost-live
Fernandez, M. D., Ferrer, R., Reig, A., Albaladejo, N., & Walker, S. L. (2015). Validation of a Spanish version of the distance education learning environment survey. Learning Environments Research, 18, 179–196. https://doi.org/10.1007/s10984-015-9179-0.
Ferreyra, M. M., Avitabile, C., Botero, J., Haimovich, F., & Urzúa, S. (2017). Momento decisivo: La educación superior en América Latina y el Caribe. Washington, DC: Banco Mundial.
Flora, D. B., LaBrish, C., & Chalmers, R. P. (2012). Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis. Frontiers in Psychology, 3, 1–21. https://doi.org/10.3389/fpsyg.2012.00055.
Galeeva, R. B. (2016). SERVQUAL application and adaptation for educational service quality assessments in Russian higher education. Quality Assurance in Education, 24(3), 329–348. https://doi.org/10.1108/QAE-06-2015-0024.
Grönroos, C. (1984). A service quality model and its marketing implications. European Journal of Marketing, 18(4), 36–44. https://doi.org/10.1108/EUM0000000004784.
Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Hayes, B. (2002). Measuring customer satisfaction. Barcelona: Ediciones Gestión 2000.
Icli, G. E., & Anil, N. K. (2014). The HEDQUAL scale: A new measurement scale of service quality for MBA programs in higher education. South African Journal of Business Management, 45(3), 31–43. https://doi.org/10.1080/02572117.1984.10586532.
Islam, N., Beer, M., & Slack, F. (2015). E-learning challenges faced by academics in higher education: A literature review. Journal of Education and Training Studies, 3(5), 102–112. https://doi.org/10.11114/jets.v3i5.947.
Kashif, M., Ramayah, T., & Sarifuddin, S. (2014). PAKSERV: Measuring higher education service quality in a collectivist cultural context. Total Quality Management & Business Excellence, 27(3–4), 265–278. https://doi.org/10.1080/14783363.2014.976939.
Ladhari, R. (2008). Alternative measures of service quality: A review. Managing Service Quality: An International Journal, 18(1), 65–86. https://doi.org/10.1108/09604520810842849.
Ladhari, R. (2009). A review of twenty years of SERVQUAL research. International Journal of Quality and Service Sciences, 1(2), 172–198. https://doi.org/10.1108/17566690910971445.
Latif, K. F., Latif, I., Farooq Sahibzada, U., & Ullah, M. (2017). In search of quality: Measuring higher education service quality (HiEduQual). Total Quality Management and Business Excellence, 1–24. https://doi.org/10.1080/14783363.2017.1338133.
Lee, Y., Choi, J., & Kim, T. (2013). Discriminating factors between completers of and dropouts from online learning courses. British Journal of Educational Technology, 44(2), 328–337. https://doi.org/10.1111/j.1467-8535.2012.01306.x.
Lewis, R. C., & Booms, B. H. (1983). The marketing aspects of service quality. In L. Berry, G. Shostack, & G. Upah (Eds.), Emerging perspectives on services marketing (pp. 99–107). Chicago, IL: American Marketing Association.
MacCallum, R. C. (2009). Factor analysis. In R. E. Millsap & A. Maydeu-Olivares (Eds.), The SAGE handbook of quantitative methods in psychology (pp. 123–147). Thousand Oaks, CA: SAGE Publications.
Martínez, M., Blanco, M., & Castán, J. M. (2013). Dimensions of perceived service quality in higher education virtual learning environments. RUSC Universities and Knowledge Society Journal, 10(1), 89–106; 268–285. https://doi.org/10.7238/rusc.v10i1.1411.
Martinez, M. J., Castan, J. M., & Juan, A. A. (2010). Using the critical incident technique to identify factors of service quality in online higher education. International Journal of Information Systems in the Service Sector, 2(4), 56–71. https://doi.org/10.4018/978-1-4666-0044-7.ch019.
Martínez, L., & Martínez, J. (2007). Measuring perceived service quality in urgent transport service. Journal of Retailing and Consumer Services, 14(1), 60–72. https://doi.org/10.1016/j.jretconser.2006.04.001.
Melo, L. A., Ramos, J. E., & Hernández, P. O. (2017). Higher education in Colombia: Current situation and efficiency analysis. Desarrollo y Sociedad, (78), 59–111. https://doi.org/10.13043/DYS.78.2.
MENa. (2013). Lineamientos Política de Educación Superior Inclusiva. Bogotá: Ministerio de Educación Nacional de Colombia. https://doi.org/10.1007/s13398-014-0173-7.2.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41–50. https://doi.org/10.2307/1251430.
Parasuraman, A., Zeithaml, V., & Berry, L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.
Parasuraman, A., Zeithaml, V. A., & Malhotra, A. (2005). E-S-QUAL: A multiple-item scale for assessing electronic service quality. Journal of Service Research, 7(3), 213–233. https://doi.org/10.1177/1094670504271156.
Pérez, J., & Giraldo, L. (2014). What can’t be ignored in service quality evaluation: Application contexts, tools and factors / Lo que no debe obviarse al evaluar la calidad del servicio: Contextos de aplicación, herramientas y factores. Revista Facultad de Ingeniería Universidad de Antioquia, (72), 145–160.
Rama, C. (2012). Los caminos de las reformas. La virtualización universitaria en América Latina. Revista Historia de la Educación Latinoamericana, 14(19), 45–70.
Rodríguez, G., Lorduy, V., & Ariza, M. (2014). Quality of higher education for distance and virtual learning: An analysis of academic performance in Colombia. Investigación & Desarrollo, 22(1), 79–120.
Sabina, Đ., & Samira, F. (2015). Quality assessment in higher education using the SERVQUAL model. Management, 20(1), 39–57. Retrieved from https://www.efst.hr/management/Vol20No1-2015/3_Djonlagic_Fazlic.pdf
Selim, H. M. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Computers & Education, 49(2), 396–413. https://doi.org/10.1016/j.compedu.2005.09.004.
Seth, N., Deshmukh, S. G., & Vrat, P. (2005). Service quality models: A review. International Journal of Quality & Reliability Management, 22(9), 913–949. https://doi.org/10.1108/02656710510625211.
Teas, R. K. (1993). Expectations, performance evaluation, and consumers’ perceptions of quality. Journal of Marketing, 57(4), 18–34.
Teeroovengadum, V., Kamalanabhan, T. J., & Seebaluck, A. K. (2016). Measuring service quality in higher education. Quality Assurance in Education, 24(2), 244–258. https://doi.org/10.1108/QAE-06-2014-0028.
Torres, E., & Araya, L. (2010). Construcción de una escala para medir la calidad del servicio de las universidades: Una aplicación al contexto chileno. Revista de Ciencias Sociales, 16(1), 54–67. Retrieved from http://www.scopus.com/inward/record.url?eid=2-s2.0-78650531288&partnerID=tZOtx3y1
Udo, G. J., Bagchi, K. K., & Kirs, P. J. (2011). Using SERVQUAL to assess the quality of e-learning experience. Computers in Human Behavior, 27(3), 1272–1283. https://doi.org/10.1016/j.chb.2011.01.009.
Uppal, M. A., Ali, S., & Gulliver, S. R. (2017). Factors determining e-learning service quality. British Journal of Educational Technology, 49(3), 412–426. https://doi.org/10.1111/bjet.12552.
van Dyke, T. P., Kappelman, L. A., & Prybutok, V. R. (1997). Measuring information systems service quality: Concerns on the use of the SERVQUAL questionnaire. MIS Quarterly, 21(2), 195. https://doi.org/10.2307/249419.
Walker, S., & Fraser, B. J. (2005). Development and validation of an instrument for assessing distance education learning environments in higher education: The Distance Education Learning Environment Survey (DELES). Learning Environments Research, 8, 289–308. https://doi.org/10.1007/s10984-005-1568-3.
Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34(6), 806–838. https://doi.org/10.1177/0011000006288127.

Publisher’s Note  Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.

