Service Quality: A Measure of Information Systems Effectiveness

Author(s): Leyland F. Pitt, Richard T. Watson and C. Bruce Kavan


Source: MIS Quarterly, Vol. 19, No. 2 (Jun., 1995), pp. 173-187
Published by: Management Information Systems Research Center, University of Minnesota
Stable URL: http://www.jstor.org/stable/249687
Accessed: 12-12-2016 11:33 UTC


Management Information Systems Research Center, University of Minnesota is collaborating with JSTOR to digitize, preserve and extend access to MIS Quarterly.

This content downloaded from 202.43.95.117 on Mon, 12 Dec 2016 11:33:42 UTC
All use subject to http://about.jstor.org/terms
IS Service Quality-Measurement

Service Quality: A Measure of Information Systems Effectiveness

By: Leyland F. Pitt
Henley Management College
Greenlands
Henley on Thames, RG9 3AU
Oxfordshire
United Kingdom
leylandp@henleymc.ac.uk

Richard T. Watson
Department of Management
University of Georgia
Athens, GA 30602-6256
U.S.A.
rwatson@uga.cc.uga.edu

C. Bruce Kavan
Department of Management, Marketing, and Logistics
University of North Florida
4567 St. John's Bluff Road
Jacksonville, FL 32216
U.S.A.
bkavan@unflvm.cis.unf.edu

Abstract

The IS function now includes a significant service component. However, commonly used measures of IS effectiveness focus on the products, rather than the services, of the IS function. Thus, there is the danger that IS researchers will mismeasure IS effectiveness if they do not include in their assessment package a measure of IS service quality. SERVQUAL, an instrument developed by the marketing area, is offered as a possible measure of IS service quality. SERVQUAL measures service dimensions of tangibles, reliability, responsiveness, assurance, and empathy. The suitability of SERVQUAL was assessed in three different types of organizations in three countries. After examination of content validity, reliability, convergent validity, nomological validity, and discriminant validity, the study concludes that SERVQUAL is an appropriate instrument for researchers seeking a measure of IS service quality.

Keywords: IS management, service quality, measurement

ISRL Categories: A104, E10206.03, GA03, GB02, GB07

Introduction

The role of the IS department within the organization has broadened considerably over the last decade. Once primarily a developer and operator of information systems, the IS department now has a much broader role. The introduction of personal computers results in more users of information technology interacting with the IS department more often. Users expect the IS department to assist them with a myriad of tasks, such as hardware and software selection, installation, problem resolution, connection to LANs, systems development, and software education. Facilities such as the information center and help desk reflect this enhanced responsibility. IS departments now provide a wider range of services to their users.1 They have expanded their roles from product developers and operations managers to become service providers.

IS departments have always had a service role because they assist users in converting data into information. Users frequently seek reports that sort, summarize, and present data in a form that is meaningful for decision making. The conversion of data to information has the typical characteristics of a service. It is often a customized, personal interaction with a user. However, service rarely appears in the vocabulary of the traditional systems development life cycle. IS textbooks (e.g., Alter, 1992; Laudon and Laudon, 1991) describe the final phase of systems development as maintenance. They state

1 We prefer "client" to the more common "user"-it draws more attention to the service-mission aspect of IS departments. However, because "user" is so ingrained in the IS community, we have stuck with it.

MIS Quarterly/June 1995 173


that the key deliverables include operations manual, usage statistics, enhancement requests, and bug-fix requests. The notion that IS departments are service providers is not well-established in the IS literature.

A recent review identifies six measures of IS success (DeLone and McLean, 1992). In recognition of the expanded role of the IS department, we have augmented this list to include service quality as a measure of IS success. This paper discusses the appropriateness of SERVQUAL to assess IS service quality. The instrument was originally developed by marketing academics to assess service quality in general.

Measuring IS Effectiveness

Organizations typically have many stakeholders with multiple and conflicting objectives of varying time horizons (Cameron and Whetton, 1983; Hannan and Freeman, 1977; Quinn and Rohrbaugh, 1983). There is rarely a single, common objective for all stakeholders. Thus, measuring the success of most organizational endeavors is problematic. IS is not immune to this problem. IS effectiveness is a multidimensional construct, and there is no single, overarching measure of IS success (DeLone and McLean, 1992). Consequently, multiple measures are required, and DeLone and McLean identify six categories into which these measures can be grouped, namely system quality, information quality, use, user satisfaction, individual impact, and organizational impact. The categories are linked to define an IS success model (see Figure 1).

The basis of the DeLone and McLean categorization-Shannon and Weaver's (1949) theory of communication-is product oriented. For example, system quality describes measures of the information processing system. Most measures in this category tap engineering-oriented performance characteristics (DeLone and McLean, 1992). Information quality represents measures of information systems output. Typical measures in this area include accuracy, precision, currency, timeliness, and reliability of information provided. In an earlier study, also based on the theory of communication, these categories are labeled as production and product, respectively (Mason, 1978). Since system quality and information quality precede other measures of IS success, existing measures seem strongly product focused. This is not surprising given that many of the studies providing the empirical basis for the categorization are based on data collected in the early 1980s, when IS was still in the mainframe era.

As the introduction pointed out, the IS department is not just a provider of products. It is also a service provider. Indeed, this may be its major function. This recognition has been apparent for some years: The quality of the IS department's service, as perceived by its users, is a key indicator of IS success (Moad, 1989; Rockart, 1982). The principal reason IS departments measure user satisfaction is to improve the quality of service they provide (Conrath and Mignen, 1990). A significantly broader perspective claims that quality and service are key measures of white-collar productivity (Deming, 1981-82). For Deming, quality is a superordinate goal.

Product and system suppliers are in the service business, albeit less so than service firms. Virtually all tangible products have intangible attributes, and all services possess some properties of tangibility. There is a perspective in the marketing literature (Berry and Parasuraman, 1991, p. 9; Shostack, 1977) that argues the inappropriateness of a rigid goods-services classification. Shostack (1977) asserts that when a customer buys a tangible product (e.g., a car) he or she also buys a service (e.g., transportation). In many cases, a product is only a means of accessing a service. Personal computer users do not just want a machine; rather they seek a system that satisfies their personal computing needs. Thus, the IS department's ability to supply installation assistance, product knowledge, software training and support, and online help is a factor that will have an impact on the relationship between IS and users.

Goods and services are not neatly compartmentalized. They exist along a spectrum of tangibility, ranging from relatively pure products, to relatively pure services, with a hybrid somewhere near the center point. Current IS success measures, product and system quality, focus on the tangible end of the spectrum. We argue that


Figure 1. Augmented IS Success Model (Adapted from DeLone and McLean, 1992)

service quality, the other end of the spectrum, needs to be considered as an additional measure of IS success. DeLone and McLean's model needs to be augmented to reflect the IS department's service role. As the revised model shows (see Figure 1), service quality affects both use and user satisfaction.

There are two possible units of analysis for IS service quality-the IS department and a particular information system. In those cases where users predominantly interact with one system (e.g., sales personnel taking telephone orders), a user's impression of service quality is based almost exclusively on one system. In this case, the unit of analysis is the information system. In situations where users have multiple interactions with the IS department (e.g., a personnel manager using a human resources management system, electronic mail, and a variety of personal computer products), the unit of analysis can be a particular system or the IS department. For the user of multiple systems, however, this distinction may be irrelevant. Thus, for example, the user who finds it difficult to get a personal computer repaired is concerned not with poor service for a designated system but with poor service in general from the IS department. Thus, while system quality and information quality may be closely associated with a particular software product, this is not always the case with service quality. Irrespective of whether a user interacts with one or multiple information systems, the quality of service can influence use and user satisfaction.

In summary, multiple instruments are required to assess IS effectiveness. The current selection ignores service quality, an increasingly important facet of the IS function. If IS researchers disregard service quality, they may gain an inaccurate reading of overall IS effectiveness. We propose that service quality should be included in the researcher's armory of measures of IS effectiveness.

Measuring Service Quality

Service quality is the most researched area of services marketing (Fisk, et al., 1993). The concept was investigated in an extensive series of focus group interviews conducted by Parasuraman, et al. (1985). They conclude that service quality is founded on a comparison between what the customer feels should be offered and what is provided. Other marketing researchers (Gronroos, 1982; Sasser, et al., 1978) also support the notion that service quality is the discrepancy between customers' perceptions and expectations. There is support for this argument in the IS literature. Conrath and Mignen (1990) report that the second most important component of user satisfaction, after general quality of service, is the match between users' expectations and


actual IS service. Rushinek and Rushinek (1986) conclude that fulfilled user expectations have a strong effect on overall satisfaction.

Users' expressions of what they want are revealed by their expectations and their perceptions of what they think they are getting. Parasuraman and his colleagues (Parasuraman, et al., 1985; 1988; 1991; Zeithaml, et al., 1990) suggest that service quality can be assessed by measuring customers' expectations and perceptions of performance levels for a range of service attributes. Then the difference between expectations and perceptions of actual performance can be calculated and averaged across attributes. As a result, the gap between expectations and perceptions can be measured.

The prime determinants of expected service quality, as suggested by Zeithaml, et al. (1990), are word-of-mouth communications, personal needs, past experiences, and communications by the service provider to the user. Users talk to each other and exchange stories about their relationship with the IS department. These conversations are a factor in fashioning users' expectations of IS service. Users' personal needs influence their expectation of IS service. A manager's need for urgency may differ depending on whether he or she has a PC failure the day before an annual presentation, or simply wants a new piece of software installed. Of course, prior experience is a key molder of expectations. Users may adjust or raise their expectations [...]

[...] are reliant on IS to convert their needs into a system. In the process, IS creates an expectation as to what the finished system will do and how it will appear. It would seem that too frequently IS misinterprets users' requirements or gives users a false impression of the outcome because many systems fail to meet users' expectations. For example, Laudon and Laudon (1991) report studies that show IS failure rates of 35 percent and 75 percent.

In the IS sphere at least, users are subject to another source of communication not appearing in Zeithaml, et al.'s (1990) model. Vendors often influence users' expectations. Users may read vendors' advertisements, attend presentations, or even receive sales calls. Vendors, in trying to sell their products, often raise expectations by parading the positive features of their wares and downplaying issues such as systems conversion, compatibility, or integration with existing systems. IS has no control over vendors' communications

[Figure: word-of-mouth communications, personal needs, past experiences, IS communications, and vendor communications shape the user's expected service; the gap is the difference between expected service and perceived service.]

Figure 2. Determinants of Users' Expectations


and must recognize that they are an ever-present force shaping expectations. This is not necessarily bad. Vendors' communications can be a positive force for change when they make users aware of what they should expect from IS.

The forces influencing users' expectations are shown in Figure 2. The difference between expected service and IS's perceived service is depicted as a gap-the discrepancy between what users expect and what they think they are getting. If IS is to address this disparity it needs some way of assessing users' expectations and perceptions, and measuring the gap.

Parasuraman, et al. (1988) operationalized their conceptual model of service quality by following the framework of Churchill (1979) for developing measures of marketing constructs. They started by making extensive use of focus groups, who identified 10 potentially overlapping dimensions of service quality. These dimensions were used to generate 97 items. Each item was then turned into two statements-one to measure expectations and one to measure perceptions. A sample of service users was asked to rate each item on a seven-point scale anchored on strongly disagree (1) and strongly agree (7). This initial questionnaire was used to assess the service-quality perceptions of customers who had recently used the services of five different types of service organizations (essentially following Lovelock's (1983) classification). The 97-item instrument was then purified and condensed by first focusing on the questions clearly discriminating between respondents having different perceptions, and second, by focusing on the dimensionality of the scale and establishing the reliability of its components.

Parasuraman, et al.'s work resulted in a 45-item instrument, SERVQUAL, for assessing customer expectations and perceptions of service quality in service and retailing organizations. The instrument has three parts (see the Appendix). The first part consists of 22 questions for measuring expectations. Questions are framed in terms of the performance of an excellent provider of the service being studied. The second part consists of 22 questions for measuring perceptions. Questions are framed in terms of the performance of the actual service provider. The final part is a single question to assess overall service quality. Underlying the 22 items are five dimensions that the authors claim are used by customers when evaluating service quality, regardless of the type of service. These dimensions are:

* Tangibles: Physical facilities, equipment, and appearance of personnel.
* Reliability: Ability to perform the promised service dependably and accurately.
* Responsiveness: Willingness to help customers and provide prompt service.
* Assurance: Knowledge and courtesy of employees and their ability to inspire trust and confidence.
* Empathy: Caring, individualized attention the service provider gives its customers.

Service quality for each dimension is captured by a difference score G (representing perceived quality for that item), where

G = P - E

and P and E are the average ratings of a dimension's corresponding perception and expectation statements respectively.

Because service quality is a significant issue in marketing, SERVQUAL has been subject to considerable debate (e.g., Brown, et al., 1993; Parasuraman, et al., 1993) regarding its dimensionality and the wording of items (Fisk, et al., 1993). Nevertheless, after examining seven studies, Fisk, et al. conclude that researchers generally agree that the instrument is a good predictor of overall service quality.

Parasuraman, et al. (1991) argue that SERVQUAL's items measure the core criteria of service quality. They assert that it transcends specific functions, companies, and industries. They suggest that context-specific items may be used to supplement the measurement of the core criteria. In this case, we found some slight rewording of one item was required to measure IS service quality. The first question was originally asked in terms of "up-to-date equipment." We changed the wording to "up-to-date hardware and software" because equipment could be perceived as referring only to hardware.
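The gap-score computation described above (G = P - E, averaged across a dimension's perception and expectation items) can be sketched in a few lines. This is an illustrative sketch only: the item-to-dimension grouping follows the 22-item instrument, but the function name and the example responses are our own assumptions, not part of the instrument.

```python
# Illustrative sketch: per-dimension SERVQUAL gap scores, G = P - E,
# for a single respondent. Item numbers follow the 22-item instrument;
# the response data below are hypothetical.

DIMENSIONS = {
    "tangibles": [1, 2, 3, 4],
    "reliability": [5, 6, 7, 8, 9],
    "responsiveness": [10, 11, 12, 13],
    "assurance": [14, 15, 16, 17],
    "empathy": [18, 19, 20, 21, 22],
}

def gap_scores(expectations, perceptions):
    """Mean gap G = P - E per dimension; inputs map item number
    (1-22) to a rating on the seven-point scale."""
    scores = {}
    for dim, items in DIMENSIONS.items():
        gaps = [perceptions[i] - expectations[i] for i in items]
        scores[dim] = sum(gaps) / len(gaps)
    return scores

# A respondent who expects 7 everywhere but perceives 5 on the
# reliability items and 6 elsewhere:
E = {i: 7 for i in range(1, 23)}
P = {i: 5 if i in DIMENSIONS["reliability"] else 6 for i in range(1, 23)}
print(gap_scores(E, P))  # reliability gap -2.0, all other dimensions -1.0
```

Negative gaps indicate perceptions falling short of expectations, which is the usual direction in practice.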


Assessment of SERVQUAL's Validity

Before SERVQUAL can be used as a measure of IS effectiveness, it is necessary to assess its validity in an IS setting. Data for validity assessment were collected in three organizations-a large South African financial institution (n=237), a large British accounting and management consulting firm (n=181), and a U.S. information services business (n=267). In each case, the standard SERVQUAL questionnaire was administered, with minor appropriate changes, to internal computer users in all firms (see the Appendix). Respondents were also required to give an overall rating of the IS department's service quality on a seven-point scale, ranging from 1 through 7 (1 = very poor, 7 = excellent).

In the financial institution, the respondents were users of online production systems. Potential respondents were identified from a security profile list of all people having access to production systems. A total of 494 questionnaires were distributed to personnel in the head office and six branches of the financial institution. The response rate was 48 percent (237 usable questionnaires).

In the accounting and consulting firm, respondents were internal users of the information systems department throughout the organization. The questionnaires were dispatched to 500 users by means of the internal mail system, and 181 usable responses were received by the cutoff date, for a response rate of 36.2 percent.

In the information services business, respondents were internal users of the information systems department. The questionnaires were dispatched by means of the internal mail system, and 267 usable responses were received by the cutoff date. The response rate was 68 percent.

Parasuraman, et al.'s (1988) construct validity appraisal of SERVQUAL guided the assessment of the validity of SERVQUAL for measuring IS service quality. This section discusses content validity, reliability, convergent validity, and nomological and discriminant validity.

Content validity

Content validity refers to the extent to which an instrument covers the range of meanings included in the concept (Babbie, 1992, p. 133). Parasuraman, et al. (1988) used focus groups to determine the dimensions of service quality and then a two-stage process was used to refine the instrument. Their thoroughness suggests that SERVQUAL does measure the concept of service quality. We could not discern any unique features of IS that make the dimensions underlying SERVQUAL (tangibles, reliability, responsiveness, assurance, and empathy) inappropriate for measuring IS service quality or

Table 1. Reliability of the SERVQUAL by Dimension

                                       Reliability Coefficients              Average Reliability
                          Number of  Financial    Consulting  Information    Coefficient Across Four
Dimension                 Items      Institution  Firm        Service        Service Companies
                                                                             (source: Parasuraman,
                                                                             et al., 1988)
Sample size                          237          181         267
Tangibles                 4          0.62         0.65        0.73           0.61
Reliability               5          0.87         0.86        0.94           0.79
Responsiveness            4          0.66         0.82        0.96           0.72
Assurance                 4          0.79         0.83        0.91           0.84
Empathy                   5          0.75         0.82        0.90           0.75
Reliability of linear                0.90         0.94        0.96           0.89
combination
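The per-dimension coefficients in Table 1 are Cronbach (1951) alphas. As a reference point, a minimal sketch of the statistic follows; the example data are hypothetical, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's (1951) alpha for an (n_respondents, n_items) array:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical ratings: 4 respondents x 2 items on the same dimension.
print(cronbach_alpha([[1, 1], [2, 1], [3, 3], [4, 3]]))  # ≈ 0.94 (16/17)
```

Alpha approaches 1 as items within a dimension covary strongly, which is why the low tangibles values flag that dimension's items as the least internally consistent.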


excluding some meaning of service quality in the IS domain.

Reliability

The reliability of each of the dimensions was assessed using Cronbach's (1951) alpha (see Table 1). Reliabilities vary from 0.62 to 0.96, and the reliability of the linear combination (Nunnally, 1978) is 0.90 or above in each case. These values compare very favorably with the average reliabilities of SERVQUAL for four service companies. The tangibles dimension provides the most concern because two of the three reliability measures are below the 0.70 level required for commercial applications (Carman, 1990).

Convergent validity

To assess convergent validity, the correlation between the overall service quality index, computed from the five dimensions, was compared to the response for a single question on the IS department's overall quality (see Table 2). The high correlation between the two measures indicates convergent validity.

Table 2. Correlation of Service Quality Index With Single-Item Overall Measure

                          Financial     Consulting   Information
                          Institution   Firm         Service
Correlation coefficient   0.60          0.82         0.60
p value                   <0.0001       <0.0001      <0.0001

Nomological and discriminant validity

Nomological validity refers to an observed relationship between measures purported to assess different but conceptually related constructs. If two constructs (C1 and C2) are conceptually related, evidence that purported measures of each (M1 and M2) are related is usually accepted as empirical support for the conceptual relationship. Nomological validity is indicated if items expected to load together in a factor analysis actually do so (Carman, 1990).

Discriminant validity is evident if items underlying each dimension load as different factors. The dimensions are then measuring different concepts. Thus, in the ideal case, exact reproduction of the five-factor model would indicate both nomological and discriminant validity.

The data from the three studies were analyzed independently using the method suggested by Johnson and Wichern (1982) to determine the number of factors to extract. Essentially, principal components and maximum likelihood methods with varimax rotation were tried and compared for a range of models.

Financial Institution

The analysis suggested that a seven-factor model, explaining 68 percent of the variance, is appropriate (see Table 3). The seven-factor model splits the suggested factor of tangibles into two parts. One part (G1)² deals with the state of hardware and software, and the other (G2 - G4) concerns physical appearances. This is not surprising for an MIS environment because clearly, hardware and software are quite distinct from physical appearance. The suggested empathy factor also splits into two parts. One part (G18 - G19) concerns personal attention, and the other (G20 - G22) is focused on more broadly based customer needs. This was also observed by Parasuraman, et al. (1991). Reliability, responsiveness, and assurance load close to expectations.

Consulting Firm

Analysis indicated that five factors should be extracted, and these explain 66 percent of the variance (see Table 4). There is an acceptable correspondence between expected and actual loadings for tangibles, reliability, assurance, and empathy. In each case, all but one of the items in the group load together on the factor (e.g., for reliability, items 5-8 load together but not item 9). Responsiveness is problematic because only two of the items load together.

Information Services Business

Analysis indicated that three factors, explaining 71 percent of the variance, should be extracted

² G1 refers to P1 - E1 where E1 is expectation question 1 and P1 is perception question 1.


Table 3. Exploratory Factor Analysis-Financial Institution


(Principal Components Method With Varimax Rotation; Loadings > .55*)
Dimensions F1 F2 F3 F4 F5 F6 F7
Tangibles G1 0.78
G2 0.81
G3 0.73
G4 0.64
Reliability G5 0.75
G6 0.75
G7 0.70
G8 0.80
G9 0.69
Responsiveness G10
G11 0.61
G12 0.77
G13 0.74
Assurance G14 0.80
G15 0.75
G16 0.55
G17
Empathy G18 0.87
G19 0.82
G20 0.75
G21 0.69
G22 0.63

* The cutoff point [...] of items in the [...]

Table 4. Exploratory Factor Analysis-Consulting Firm
(Principal Components Method With Varimax Rotation; Loadings > .55)
Dimensions F1 F2 F3 F4 F5
Tangibles G1 0.78
G2 0.83
G3 0.57
G4 0.70
Reliability G5 0.85
G6 0.57
G7 0.76
G8 0.80
G9
Responsiveness G10 0.69
G11 0.60
G12 0.61
G13
Assurance G14 0.67
G15 0.55 0.63
G16 0.82
G17 0.56
Empathy G18
G19 0.77
G20 0.55
G21 0.65
G22 0.55
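Rotated loadings such as those reported in Tables 3-5 come from a varimax rotation of an initial factor solution. The sketch below shows the standard SVD-based varimax iteration, as a generic textbook formulation rather than the authors' exact procedure; the example loading matrix is made up.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Varimax rotation of a (p items x k factors) loading matrix.
    Iterates toward loadings that are large on one factor and near zero
    on the rest; the rotation matrix stays orthogonal, so each item's
    communality (row sum of squares) is unchanged."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = L @ R
        # Column sums of squared loadings of the current rotation.
        col_ss = np.diag((rotated ** 2).sum(axis=0))
        u, s, vt = np.linalg.svd(
            L.T @ (rotated ** 3 - (gamma / p) * rotated @ col_ss)
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):  # criterion stopped improving
            break
        d = d_new
    return L @ R

# Example: four items forming two clean clusters.
L0 = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.3, 0.8]])
L1 = varimax(L0)
```

Applying a loadings cutoff (such as the .55 used in the tables) to the rotated matrix then yields the dimension-to-factor assignments discussed in the text.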


Table 5. Exploratory Factor Analysis-Information Services Business


(Principal Components Method With Varimax Rotation; Loadings > .55)
Dimensions F1 F2 F3
Tangibles G1 0.60
G2 0.78
G3 0.77
G4 0.85
Reliability G5 0.86
G6 0.78
G7 0.76
G8 0.79
G9 0.67
Responsiveness G10 0.67
G11 0.58 0.63
G12 0.73
G13 0.70
Assurance G14 0.67
G15 0.64
G16 0.75
G17 0.72
Empathy G18 0.80
G19
G20 0.80
G21 0.78
G22 0.74

(see Table 5). The factor loadings are reasonably consistent with the suggested model. Reliability and assurance load as expected. Tangibles, responsiveness, and empathy miss by one item in each case.

Comparison of the factor analyses

An examination of the three factor analyses indicates support for nomological validity. Of the 15 loadings examined, there are three exact fits (all variables of a dimension loading together), 10 near fits (all but one variable of a dimension loading together), and two poor fits (two or more variables of a dimension not loading). There are some problems with discriminant validity because some factors do not appear to be different from one another. This is particularly noticeable in the case of the information services business, where responsiveness, assurance, and empathy load as one factor. It may be that some of these concepts are too close together for respondents to differentiate. Concepts like responsiveness and empathy are similar; in his Thesaurus, Roget twice puts them together in the classifications of sensation and feeling (Roget, 1977).

In summary, SERVQUAL passes content, reliability and convergent validity examination. There are some uncertainties with nomological and discriminant validity, but not enough to discontinue consideration of SERVQUAL. It is a suitable measure of IS service quality.

Limitations

Potential users of SERVQUAL should be cautious. The reliability of the tangibles construct is low. Although this is also a problem with the original instrument, it cannot be ignored. The whole issue of tangibles in an IS environment probably needs further investigation. It may be appropriate to split tangibles into two dimensions: appearance and hardware/software. Because hardware and software can have a significant impact, a measure of IS service quality possibly needs further questions to tap these dimensions.

SERVQUAL does not always clearly discriminate among the dimensions of service quality. Researchers who use SERVQUAL to discriminate the impact of service changes should be wary of using it to distinguish among the closely aligned concepts of responsiveness, assurance,


and empathy. These concepts are not semantically distant, and there appear to be situations where users perceive them very similarly.

IS Research and Service Quality

Service quality can be both an independent and dependent variable of IS research. The augmented IS success model (see Figure 1) indicates that IS service quality is an antecedent of use and user satisfaction. Because IS now has an important service component, IS researchers may be interested in identifying which managerial actions raise service level. In this situation, service quality becomes a dependent variable. Furthermore, service quality may be part of a causal model. Consider a study where a service policy intervention (e.g., a departmental help desk) is designed to increase personal computer use. Figure 1 suggests that service quality would be a mediating variable in this situation.

IS Practice and Service Quality

Any instrument advocated as a measure of IS success should first be validated before application. This study provides evidence that practitioners can, with considerable confidence, use SERVQUAL as a measure of IS success. Our research subsequent to this study (e.g., Pitt and Watson, 1994) has focused on the application of SERVQUAL as a diagnostic tool. We have completed longitudinal studies in two organizations. In each case, service quality was initially measured, then IS management took steps to improve service, and service quality was remeasured 12 months later. In both cases, significant improvements in service quality were detected. These organizations have embarked on a program of continuous incremental improvement of IS service quality and use SERVQUAL to measure their progress. Our experience shows that IS practitioners accept SERVQUAL; it provides valuable information for redirecting IS staff toward providing higher-quality service to users. Because SERVQUAL is a general measure of service quality, it is well-suited to benchmarking (Camp, 1989), which establishes goals based on best practice. Thus, IS managers can potentially use SERVQUAL to benchmark their service performance against service industry leaders such as airlines, banks, and hotels.

Future Research

We see three possible directions for future research on service quality. First, Q-method could be used to examine service quality from a different perspective. Second, the relationship between customer service life cycle and service quality could be examined. Third, the relative importance of the determinants of expected service could be investigated.

Q-method

SERVQUAL, as with most questionnaires of this kind, does not require respondents to express a preference for one service characteristic over another (e.g., reliability over assurance). They can rate items comprising both dimensions at the high end of the scale. However, organizations frequently have to make a choice when allocating scarce resources. For example, managers need to know whether they should put more effort into reliability or empathy. This issue can be addressed by asking respondents to allocate relative dimension importance on a constant sum scale (e.g., 100 points). Zeithaml, et al. (1990) recommend this approach, which results in a "weighted gap score" by dimension and in total. Another approach to identifying preferences is Q-method (Brown, 1980; Stephenson, 1953), which can identify a preference structure and indicate patterning within groups of users.

We intend to use Q-method to gain insights into users' preference structure for the dimensions of service and to discover if there is a single uniform profile of service expectations, or there are
service quality as a useful, valid measure of IS classes of users with different expectations. We
effectiveness, and they find that SERVQUAL propose to use the questions from SERVQUAL


and recast them for Q-method. Subjects will be asked to sort the items twice: first, on the basis of an ideal service provider (their expectations); second, on the basis of the actual service provider (their perceptions). This is very similar to a technique described by Stephenson (1953), the father of Q-method, in his study of student achievement and motivation.

Customer service life cycle

The customer service life cycle, a variation on the customer resource life cycle (Ives and Learmonth, 1984; Ives and Mason, 1990), breaks down the service relationship with a customer into four major phases: requirements, acquisitions, stewardship, and retirement. It is highly likely users' expectations differ among these phases. Empathy might be the major need during requirements and reliability during stewardship. Thus, examining service quality by the customer service life cycle phase is an opportunity for future research.

Determinants of expected service

The model shown in Figure 2 indicates there are five determinants of expected service. The relative influence of each of these variables can be measured (see Boulding, et al., 1993; Zeithaml, et al., 1993). Again, the marketing literature provides a suitable starting point, because discovering what influences customers is a major theme of marketing research.

Conclusion

The traditional goal of an IS organization is to build, maintain, and operate information delivery systems. Users expect efficient and effective delivery systems. However, for the user, the goal is not the delivery system, but rather the information it can provide. For example, a user may want sales reported by geographic region. A computer-based information system is one alternative for satisfying that need. The same report could be produced manually. For the user, the information is paramount and the delivery mechanism secondary. In addition to developing and operating computer-based information systems, an IS department performs other functions such as responding to questions about software products, providing training, and giving equipment advice. In each of these cases, the user's goal is to acquire information. Thus, the IS department delivers information through both highly structured information systems and customized personal interactions. Clearly, providing information is a fundamental service of an IS department, and it should be concerned with the quality of service it delivers. Thus, we believe, the effectiveness of an IS unit can be partially assessed by its capability to provide quality service to its users.

The major contribution of this study is to demonstrate that SERVQUAL, an extensively applied marketing instrument for measuring service quality, is applicable in the IS arena. The paper's other contributions are highlighting the service component of the IS department, augmenting the IS success model, presenting a logical model of users' expectations, and giving some directions for future research.

Acknowledgements

We gratefully acknowledge the insightful comments of the editor, associate editor, and anonymous reviewers of MISQ. We thank Neil Lilford of The Old Mutual, South Africa, for his assistance.

References

Alter, S.L. Information Systems: A Management Perspective, Addison-Wesley, Reading, MA, 1992.
Babbie, E. The Practice of Social Research (6th ed.), Wadsworth, Belmont, CA, 1992.
Berry, L.L. and Parasuraman, A. Marketing Services: Competing Through Quality, Free Press, New York, NY, 1991.
Boulding, W., Kalra, A., Staelin, R., and Zeithaml, V.A. "A Dynamic Model of Service Quality: From Expectations to Behavioral Intentions," Journal of Marketing Research (30:1), February 1993, pp. 7-27.
Brown, T.J., Churchill, G.A.J., and Peter, J.P. "Improving the Measurement of Service Quality," Journal of Retailing (69:1), Spring 1993, pp. 127-139.


Cameron, K.S. and Whetten, D.A. Organizational Effectiveness: A Comparison of Multiple Models, Academic Press, New York, NY, 1983.
Camp, R.C. Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, Quality Press, Milwaukee, WI, 1989.
Carman, J.M. "Consumer Perceptions of Service Quality: An Assessment of SERVQUAL Dimensions," Journal of Retailing (66:1), Spring 1990, pp. 33-53.
Churchill, G.A. "A Paradigm for Developing Better Measures of Marketing Constructs," Journal of Marketing Research (16), February 1979, pp. 64-73.
Conrath, D.W. and Mignen, O.P. "What is Being Done to Measure User Satisfaction with EDP/MIS," Information & Management (19:1), August 1990, pp. 7-19.
Cronbach, L.J. "Coefficient Alpha and the Internal Structure of Tests," Psychometrika (16:3), September 1951, pp. 297-333.
DeLone, W.H. and McLean, E.R. "Information Systems Success: The Quest for the Dependent Variable," Information Systems Research (3:1), March 1992, pp. 60-95.
Deming, W.E. "Improvement of Quality and Productivity Through Action by Management," National Productivity Review, Winter 1981-82, pp. 12-22.
Fisk, R.P., Brown, S.W., and Bitner, M.J. "Tracking the Evolution of the Services Marketing Literature," Journal of Retailing (69:1), Spring 1993, pp. 61-103.
Gronroos, C. Strategic Management and Marketing in the Service Sector, Swedish School of Economics and Business Administration, Helsingfors, Finland, 1982.
Hannan, M.T. and Freeman, J. "Obstacles to Comparative Studies," in New Perspectives on Organizational Effectiveness, P.S. Goodman and J.M. Pennings (eds.), Jossey-Bass, San Francisco, CA, 1977, pp. 106-131.
Ives, B. and Learmonth, G.P. "The Information System as a Competitive Weapon," Communications of the ACM (27:12), December 1984, pp. 1193-1201.
Ives, B. and Mason, R. "Can Information Technology Revitalize Your Customer Service?" Academy of Management Executive (4:4), November 1990, pp. 52-69.
Johnson, R.A. and Wichern, D.W. Applied Multivariate Statistical Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1982.
Laudon, K.C. and Laudon, J.P. Management Information Systems: A Contemporary Perspective (2nd ed.), Macmillan, New York, NY, 1991.
Lovelock, C.H. "Classifying Services to Gain Strategic Marketing Insights," Journal of Marketing (47), Summer 1983, pp. 9-20.
Mason, R.O. "Measuring Information Output: A Communication Systems Approach," Information & Management (1:5), 1978, pp. 219-234.
Moad, J. "Asking Users to Judge IS," Datamation (35:21), November 1, 1989, pp. 93-100.
Nunnally, J.C. Psychometric Theory (2nd ed.), McGraw-Hill, New York, NY, 1978.
Parasuraman, A., Zeithaml, V.A., and Berry, L.L. "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing (49), Fall 1985, pp. 41-50.
Parasuraman, A., Zeithaml, V.A., and Berry, L.L. "SERVQUAL: A Multiple-item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing (64:1), Spring 1988, pp. 12-40.
Parasuraman, A., Berry, L.L., and Zeithaml, V.A. "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing (67:4), Winter 1991, pp. 420-450.
Parasuraman, A., Berry, L.L., and Zeithaml, V.A. "More on Improving the Measurement of Service Quality," Journal of Retailing (69:1), Spring 1993, pp. 140-147.
Pitt, L.F. and Watson, R.T. "Longitudinal Measurement of Service Quality in Information Systems: A Case Study," Proceedings of the Fifteenth International Conference on Information Systems, Vancouver, B.C., 1994.
Quinn, R.E. and Rohrbaugh, J. "A Spatial Model of Effectiveness Criteria: Towards a Competing Values Approach to Organizational Analysis," Management Science (29:3), March 1983, pp. 363-377.
Rockart, J.F. "The Changing Role of the Information Systems Executive: A Critical Success Factors Perspective," Sloan Management Review (24:1), Fall 1982, pp. 3-13.
Roget, P.M. Roget's International Thesaurus, revised by Chapman, R.L. (4th ed.), Thomas Y. Crowell, New York, NY, 1977.


Rushinek, A. and Rushinek, S.F. "What Makes Users Happy?" Communications of the ACM (29:7), July 1986, pp. 594-598.
Sasser, W.E., Olsen, R.P., and Wyckoff, D.D. Management of Service Operations: Text and Cases, Allyn and Bacon, Boston, MA, 1978.
Shannon, C.E. and Weaver, W. The Mathematical Theory of Communication, University of Illinois Press, Urbana, IL, 1949.
Shostack, G.L. "Breaking Free from Product Marketing," Journal of Marketing (41:2), April 1977, pp. 73-80.
Stephenson, W. The Study of Behavior: Q-technique and Its Methodology, University of Chicago Press, Chicago, IL, 1953.
Zeithaml, V., Parasuraman, A., and Berry, L.L. Delivering Quality Service: Balancing Customer Perceptions and Expectations, Free Press, New York, NY, 1990.
Zeithaml, V.A., Berry, L.L., and Parasuraman, A. "The Nature and Determinants of Customer Expectations of Service," Journal of the Academy of Marketing Science (21:1), Winter 1993, pp. 1-12.

About the Authors

Leyland F. Pitt is a professor of management studies at Henley Management College and Brunel University, Henley-on-Thames, UK, where he teaches marketing. He holds an MCom in marketing from Rhodes University, and M.B.A. and Ph.D. degrees in marketing from the University of Pretoria. He is also an adjunct faculty member of the Bodo Graduate School, Norway, and the University of Oporto, Portugal. He has taught marketing in graduate and executive programs in the U.S., Australia, Singapore, South Africa, France, Malta, and Greece. His publications have been accepted by such journals as The Journal of Small Business Management, Industrial Marketing Management, Journal of Business Ethics, and Omega. Current research interests are in the areas of services marketing, market orientation, and international marketing.

Richard T. Watson is an associate professor and graduate coordinator in the Department of Management at the University of Georgia. He has a Ph.D. in MIS from the University of Minnesota. His publications include articles in MIS Quarterly, Communications of the ACM, Journal of Business Ethics, Omega, and Communication Research. His research focuses on national culture and its impact on group support systems, management of IS, and national information infrastructure. He has taught in more than 10 countries.

C. Bruce Kavan is an assistant professor of MIS and interim director of the Barnett Institute at the University of North Florida. Prior to completing his doctorate at the University of Georgia in 1991, he held several executive positions with Dun & Bradstreet, including vice president of Information Service for their Receivable Management Services division. Dr. Kavan is an extremely active consultant in the strategic use of technology, system architecture, and client/server technologies. His main areas of research interest are in inter-organizational systems, the delivery of IT services, and technology adoption/diffusion. His work has appeared in such publications as Information Strategy: The Executive's Journal, Auerbach's Handbook of IS Management, and Journal of Services Marketing.


Appendix
Service Quality Expectations
Directions: This survey deals with your opinion of the Information Systems Department (IS). Based on
your experiences as a user, please think about the kind of IS unit that would deliver excellent quality of
service. Think about the kind of IS unit with which you would be pleased to do business. Please show
the extent to which you think such a unit would possess the feature described by each statement. If you
strongly agree that these units should possess a feature, circle 7. If you strongly disagree that these
units should possess a feature, circle 1. If your feeling is less strong, circle one of the numbers in the
middle. There are no right or wrong answers; all we are interested in is a number that truly reflects your
expectations about IS.
Please respond to ALL the statements. Each statement is rated from 1 (strongly disagree) to 7 (strongly agree).

E1 They will have up-to-date hardware and software
E2 Their physical facilities will be visually appealing
E3 Their employees will be well dressed and neat in appearance
E4 The appearance of the physical facilities of these IS units will be in keeping with the kind of services provided
E5 When these IS units promise to do something by a certain time, they will do so
E6 When users have a problem, these IS units will show a sincere interest in solving it
E7 These IS units will be dependable
E8 They will provide their services at the times they promise to do so
E9 They will insist on error-free records
E10 They will tell users exactly when services will be performed
E11 Employees will give prompt service to users
E12 Employees will always be willing to help users
E13 Employees will never be too busy to respond to users' requests
E14 The behavior of employees will instill confidence in users
E15 Users will feel safe in their transactions with these IS units' employees
E16 Employees will be consistently courteous with users
E17 Employees will have the knowledge to do their job well
E18 These IS units will give users individual attention
E19 These IS units will have operating hours convenient to all their users
E20 These IS units will have employees who give users personal attention
E21 These IS units will have the users' best interests at heart
E22 The employees of these IS units will understand the specific needs of their users


Service Quality Perceptions

Directions: The following set of statements relate to your feelings about ABC corporation's IS unit. For each statement, please show the extent to which you believe ABC corporation's IS has the feature described by the statement. Once again, circling a 7 means that you strongly agree that ABC corporation's IS has that feature, and circling 1 means that you strongly disagree. You may circle any of the numbers in the middle that show how strong your feelings are. There are no right or wrong answers; all we are interested in is a number that best shows your perceptions about ABC corporation's IS unit.

Please respond to ALL the statements. Each statement is rated from 1 (strongly disagree) to 7 (strongly agree).

P1 IS has up-to-date hardware and software
P2 IS's physical facilities are visually appealing
P3 IS's employees are well dressed and neat in appearance
P4 The appearance of the physical facilities of IS is in keeping with the kind of services provided
P5 When IS promises to do something by a certain time, it does so
P6 When users have a problem, IS shows a sincere interest in solving it
P7 IS is dependable
P8 IS provides its services at the times it promises to do so
P9 IS insists on error-free records
P10 IS tells users exactly when services will be performed
P11 IS employees give prompt service to users
P12 IS employees are always willing to help users
P13 IS employees are never too busy to respond to users' requests
P14 The behavior of IS employees instills confidence in users
P15 Users feel safe in their transactions with IS's employees
P16 IS employees are consistently courteous with users
P17 IS employees have the knowledge to do their job well
P18 IS gives users individual attention
P19 IS has operating hours convenient to all its users
P20 IS has employees who give users personal attention
P21 IS has the users' best interests at heart
P22 Employees of IS understand the specific needs of its users

Now please complete the following:

1. Overall, how would you rate the quality of the service provided by ABC corporation's IS unit? Please circle one of the points on the scale below.

Poor 1 2 3 4 5 6 7 Excellent
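As an illustration of how the paired instruments above are typically scored, the sketch below averages the item gaps (perception minus expectation) within the five SERVQUAL dimensions and then applies the constant-sum importance weights discussed under Future Research. This is our sketch, not part of the study: the item-to-dimension grouping follows Parasuraman, et al. (1988), and every rating and importance figure here is invented for illustration.

```python
# Score one respondent's SERVQUAL answers: average the item gaps
# (perception - expectation) within each dimension, then weight each
# dimension gap by importance points allocated on a 100-point constant-sum scale.

DIMENSIONS = {                       # item numbers per Parasuraman, et al. (1988)
    "tangibles":      [1, 2, 3, 4],
    "reliability":    [5, 6, 7, 8, 9],
    "responsiveness": [10, 11, 12, 13],
    "assurance":      [14, 15, 16, 17],
    "empathy":        [18, 19, 20, 21, 22],
}

def gap_scores(expectations, perceptions):
    """expectations/perceptions map item number (1-22) to a 1-7 rating."""
    return {name: sum(perceptions[i] - expectations[i] for i in items) / len(items)
            for name, items in DIMENSIONS.items()}

def weighted_gap(scores, importance):
    """importance maps dimension name to points; the points must sum to 100."""
    assert sum(importance.values()) == 100
    return sum(scores[d] * importance[d] / 100 for d in scores)

# Invented data: expectations uniformly high, perceptions lagging
# on the reliability and empathy items.
E = {i: 6 for i in range(1, 23)}
P = {i: 5 for i in range(1, 23)}
for i in DIMENSIONS["reliability"] + DIMENSIONS["empathy"]:
    P[i] = 4

scores = gap_scores(E, P)            # e.g., reliability gap = (4 - 6) = -2.0
overall = weighted_gap(scores, {"tangibles": 10, "reliability": 35,
                                "responsiveness": 25, "assurance": 20,
                                "empathy": 10})
print(scores)
print(f"overall weighted gap score: {overall:+.2f}")
```

A negative dimension score indicates perceptions falling short of expectations on that dimension; the constant-sum weights then show which shortfalls matter most given the importance users assign to each dimension.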
