JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted
digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about
JSTOR, please contact support@jstor.org.
Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at
http://about.jstor.org/terms
This content downloaded from 202.43.95.117 on Mon, 12 Dec 2016 11:33:42 UTC
All use subject to http://about.jstor.org/terms
IS Service Quality-Measurement
that the key deliverables include operations manual, usage statistics, enhancement requests, and bug-fix requests. The notion that IS departments are service providers is not well-established in the IS literature. A recent review identifies six measures of IS success (DeLone and McLean, 1992). In recognition of the expanded role of the IS department, we have augmented this list to include service quality as a measure of IS success. This paper discusses the appropriateness of SERVQUAL to assess IS service quality. The instrument was originally developed by marketing academics to assess service quality in general.

Measuring IS Effectiveness

Organizations typically have many stakeholders with multiple and conflicting objectives of varying time horizons (Cameron and Whetton, 1983; Hannan and Freeman, 1977; Quinn and Rohrbaugh, 1983). There is rarely a single, common objective for all stakeholders. Thus, measuring the success of most organizational endeavors is problematic. IS is not immune to this problem. IS effectiveness is a multidimensional construct, and there is no single, overarching measure of IS success (DeLone and McLean, 1992). Consequently, multiple measures are required, and DeLone and McLean identify six categories into which these measures can be grouped, namely system quality, information quality, use, user satisfaction, individual impact, and organizational impact. The categories are linked to define an IS success model (see Figure 1).

The basis of the DeLone and McLean categorization, Shannon and Weaver's (1949) theory of communication, is product oriented. For example, system quality describes measures of the information processing system. Most measures in this category tap engineering-oriented performance characteristics (DeLone and McLean, 1992). Information quality represents measures of information systems output. Typical measures in this area include accuracy, precision, currency, timeliness, and reliability of information provided. In an earlier study, also based on the theory of communication, these categories are labeled as production and product, respectively (Mason, 1978). Since system quality and information quality precede other measures of IS success, existing measures seem strongly product focused. This is not surprising given that many of the studies providing the empirical basis for the categorization are based on data collected in the early 1980s, when IS was still in the mainframe era.

As the introduction pointed out, the IS department is not just a provider of products. It is also a service provider. Indeed, this may be its major function. This recognition has been apparent for some years: The quality of the IS department's service, as perceived by its users, is a key indicator of IS success (Moad, 1989; Rockart, 1982). The principal reason IS departments measure user satisfaction is to improve the quality of service they provide (Conrath and Mignen, 1990). A significantly broader perspective claims that quality and service are key measures of white-collar productivity (Deming, 1981-82). For Deming, quality is a superordinate goal.

Product and system suppliers are in the service business, albeit less so than service firms. Virtually all tangible products have intangible attributes, and all services possess some properties of tangibility. There is a perspective in the marketing literature (Berry and Parasuraman, 1991, p. 9; Shostack, 1977) that argues the inappropriateness of a rigid goods-services classification. Shostack (1977) asserts that when a customer buys a tangible product (e.g., a car) he or she also buys a service (e.g., transportation). In many cases, a product is only a means of accessing a service. Personal computer users do not just want a machine; rather, they seek a system that satisfies their personal computing needs. Thus, the IS department's ability to supply installation assistance, product knowledge, software training and support, and online help is a factor that will have an impact on the relationship between IS and users.

Goods and services are not neatly compartmentalized. They exist along a spectrum of tangibility, ranging from relatively pure products to relatively pure services, with a hybrid somewhere near the center point. Current IS success measures, product and system quality, focus on the tangible end of the spectrum. We argue that
Figure 1. Augmented IS Success Model (Adapted from DeLone and McLean, 1992)
service quality, the other end of the spectrum, needs to be considered as an additional measure of IS success. DeLone and McLean's model needs to be augmented to reflect the IS department's service role. As the revised model shows (see Figure 1), service quality affects both use and user satisfaction.

There are two possible units of analysis for IS service quality: the IS department and a particular information system. In those cases where users predominantly interact with one system (e.g., sales personnel taking telephone orders), a user's impression of service quality is based almost exclusively on one system. In this case, the unit of analysis is the information system. In situations where users have multiple interactions with the IS department (e.g., a personnel manager using a human resources management system, electronic mail, and a variety of personal computer products), the unit of analysis can be a particular system or the IS department. For the user of multiple systems, however, this distinction may be irrelevant. Thus, for example, the user who finds it difficult to get a personal computer repaired is concerned not with poor service for a designated system but with poor service in general from the IS department. Thus, while system quality and information quality may be closely associated with a particular software product, this is not always the case with service quality. Irrespective of whether a user interacts with one or multiple information systems, the quality of service can influence use and user satisfaction.

In summary, multiple instruments are required to assess IS effectiveness. The current selection ignores service quality, an increasingly important facet of the IS function. If IS researchers disregard service quality, they may gain an inaccurate reading of overall IS effectiveness. We propose that service quality should be included in the researcher's armory of measures of IS effectiveness.

Measuring Service Quality

Service quality is the most researched area of services marketing (Fisk, et al., 1993). The concept was investigated in an extensive series of focus group interviews conducted by Parasuraman, et al. (1985). They conclude that service quality is founded on a comparison between what the customer feels should be offered and what is provided. Other marketing researchers (Gronroos, 1982; Sasser, et al., 1978) also support the notion that service quality is the discrepancy between customers' perceptions and expectations. There is support for this argument in the IS literature. Conrath and Mignen (1990) report that the second most important component of user satisfaction, after general quality of service, is the match between users' expectations and
[Figure 2: Vendor communications and IS communications shape users' expected service; a gap separates expected service from perceived service.]
and must recognize that they are an ever-present force shaping expectations. This is not necessarily bad. Vendors' communications can be a positive force for change when they make users aware of what they should expect from IS.

The forces influencing users' expectations are shown in Figure 2. The difference between expected service and IS's perceived service is depicted as a gap: the discrepancy between what users expect and what they think they are getting. If IS is to address this disparity, it needs some way of assessing users' expectations and perceptions, and measuring the gap.

Parasuraman, et al. (1988) operationalized their conceptual model of service quality by following the framework of Churchill (1979) for developing measures of marketing constructs. They started by making extensive use of focus groups, who identified 10 potentially overlapping dimensions of service quality. These dimensions were used to generate 97 items. Each item was then turned into two statements: one to measure expectations and one to measure perceptions. A sample of service users was asked to rate each item on a seven-point scale anchored on strongly disagree (1) and strongly agree (7). This initial questionnaire was used to assess the service-quality perceptions of customers who had recently used the services of five different types of service organizations (essentially following Lovelock's (1983) classification). The 97-item instrument was then purified and condensed by first focusing on the questions clearly discriminating between respondents having different perceptions, and second, by focusing on the dimensionality of the scale and establishing the reliability of its components.

Parasuraman, et al.'s work resulted in a 45-item instrument, SERVQUAL, for assessing customer expectations and perceptions of service quality in service and retailing organizations. The instrument has three parts (see the Appendix). The first part consists of 22 questions for measuring expectations. Questions are framed in terms of the performance of an excellent provider of the service being studied. The second part consists of 22 questions for measuring perceptions. Questions are framed in terms of the performance of the actual service provider. The final part is a single question to assess overall service quality. Underlying the 22 items are five dimensions that the authors claim are used by customers when evaluating service quality, regardless of the type of service. These dimensions are:

* Tangibles: Physical facilities, equipment, and appearance of personnel.
* Reliability: Ability to perform the promised service dependably and accurately.
* Responsiveness: Willingness to help customers and provide prompt service.
* Assurance: Knowledge and courtesy of employees and their ability to inspire trust and confidence.
* Empathy: Caring, individualized attention the service provider gives its customers.

Service quality for each dimension is captured by a difference score G (representing perceived quality for that item), where

G = P - E

and P and E are the average ratings of the dimension's corresponding perception and expectation statements, respectively.

Because service quality is a significant area of services marketing, SERVQUAL has been subject to considerable debate (e.g., Brown, et al., 1993; Parasuraman, et al., 1993) regarding its dimensionality and the wording of items (Fisk, et al., 1993). Nevertheless, after examining seven studies, Fisk, et al. conclude that researchers generally agree that the instrument is a good predictor of overall service quality.

Parasuraman, et al. (1991) argue that SERVQUAL's items measure the core criteria of service quality. They assert that it transcends specific functions, companies, and industries. They suggest that context-specific items may be used to supplement the measurement of the core criteria. In this case, we found some slight rewording of one item was required to measure IS service quality. The first question was originally asked in terms of "up-to-date equipment." We changed the wording to "up-to-date hardware and software" because equipment could be perceived as referring only to hardware.
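The G = P - E computation is simple arithmetic, applied item by item and averaged within each dimension. A minimal sketch follows; the item-to-dimension groupings mirror those shown in Table 4 (G1-G4 tangibles through G18-G22 empathy), but the ratings are invented sample data, not figures from the paper:

```python
# SERVQUAL gap scoring sketch: G = P - E per item, averaged within
# each of the five dimensions. Item groupings follow Table 4; the
# example ratings below are made up for illustration.

DIMENSIONS = {
    "tangibles": [1, 2, 3, 4],
    "reliability": [5, 6, 7, 8, 9],
    "responsiveness": [10, 11, 12, 13],
    "assurance": [14, 15, 16, 17],
    "empathy": [18, 19, 20, 21, 22],
}

def gap_scores(expectations, perceptions):
    """Average G = P - E for each dimension.

    expectations/perceptions: dicts mapping item number (1-22)
    to a rating on the seven-point scale.
    """
    scores = {}
    for dim, items in DIMENSIONS.items():
        gaps = [perceptions[i] - expectations[i] for i in items]
        scores[dim] = sum(gaps) / len(gaps)
    return scores

# Example: a respondent who expects 7s everywhere but perceives 5s;
# every dimension's gap score is -2.0 (perceptions fall short).
E = {i: 7 for i in range(1, 23)}
P = {i: 5 for i in range(1, 23)}
print(gap_scores(E, P))
```

A negative gap score indicates that perceived service falls short of expectations on that dimension; a zero or positive score indicates expectations are met or exceeded.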
[...] excluding some meaning of service quality in the IS domain.

[...] concepts. Thus, in the ideal case, exact reproduction of the five-factor model would indicate both nomological and discriminant validity.
Table 4. E[...] (Principal [...])
* The cutoff po[...] of items in the [...]
Dimensions F1 F2 F3 F4 F5
Tangibles G1 0.78
G2 0.83
G3 0.57
G4 0.70
Reliability G5 0.85
G6 0.57
G7 0.76
G8 0.80
G9
Responsiveness G10 0.69
G11 0.60
G12 0.61
G13
Assurance G14 0.67
G15 0.55 0.63
G16 0.82
G17 0.56
Empathy G18
G19 0.77
G20 0.55
G21 0.65
G22 0.55
(see Table 5). The factor loadings are reasonably consistent with the suggested model. Reliability and assurance load as expected. Tangibles, responsiveness, and empathy miss by one item in each case.

In summary, SERVQUAL passes content, reliability, and convergent validity examination. There are some uncertainties with nomological and discriminant validity, but not enough to discontinue consideration of SERVQUAL. It is a suitable measure of IS service quality.
and empathy. These concepts are not semantically distant, and there appear to be situations where users perceive them very similarly.

IS Research and Service Quality

Service quality can be both an independent and dependent variable of IS research. The augmented IS success model (see Figure 1) indicates that IS service quality is an antecedent of use and user satisfaction. Because IS now has an important service component, IS researchers may be interested in identifying which managerial actions raise service level. In this situation, service quality becomes a dependent variable. Furthermore, service quality may be part of a causal model. Consider a study where a service policy intervention (e.g., a departmental help desk) is designed to increase personal computer use. Figure 1 suggests that service quality would be a mediating variable in this situation.

IS Practice and Service Quality

Any instrument advocated as a measure of IS success should first be validated before application. This study provides evidence that practitioners can, with considerable confidence, use SERVQUAL as a measure of IS success. Our research subsequent to this study (e.g., Pitt and Watson, 1994) has focused on the application of SERVQUAL as a diagnostic tool. We have completed longitudinal studies in two organizations. In each case, service quality was initially measured, then IS management took steps to improve service, and service quality was remeasured 12 months later. In both cases, significant improvements in service quality were detected. These organizations have embarked on a program of continuous incremental improvement of IS service quality and use SERVQUAL to measure their progress. Our experience shows that IS practitioners accept service quality as a useful, valid measure of IS effectiveness, and they find that SERVQUAL provides valuable information for redirecting IS staff toward providing higher-quality service to users. Because SERVQUAL is a general measure of service quality, it is well-suited to benchmarking (Camp, 1989), which establishes goals based on best practice. Thus, IS managers can potentially use SERVQUAL to benchmark their service performance against service industry leaders such as airlines, banks, and hotels.

Future Research

We see three possible directions for future research on service quality. First, Q-method could be used to examine service quality from a different perspective. Second, the relationship between the customer service life cycle and service quality could be examined. Third, the relative importance of the determinants of expected service could be investigated.

Q-method

SERVQUAL, as with most questionnaires of this kind, does not require respondents to express a preference for one service characteristic over another (e.g., reliability over assurance). They can rate items comprising both dimensions at the high end of the scale. However, organizations frequently have to make a choice when allocating scarce resources. For example, managers need to know whether they should put more effort into reliability or empathy. This issue can be addressed by asking respondents to allocate relative dimension importance on a constant sum scale (e.g., 100 points). Zeithaml, et al. (1990) recommend this approach, which results in a "weighted gap score" by dimension and in total. Another approach to identifying preferences is Q-method (Brown, 1980; Stephenson, 1953), which can identify a preference structure and indicate patterning within groups of users.

We intend to use Q-method to gain insights into users' preference structure for the dimensions of service and to discover if there is a single uniform profile of service expectations, or there are classes of users with different expectations. We propose to use the questions from SERVQUAL
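The constant sum weighting that Zeithaml, et al. (1990) recommend is easy to sketch: each respondent's 100 allocated points become dimension weights applied to the gap scores. The weights and gap values below are invented for illustration:

```python
# Weighted gap score sketch: a respondent allocates 100 points across
# the five SERVQUAL dimensions; each dimension's gap (G = P - E) is
# weighted by points/100 and the weighted gaps are summed.
# All numbers below are invented for illustration.

def weighted_gap(gaps, points):
    """gaps: dimension -> average gap score.
    points: dimension -> points on a 100-point constant sum scale.
    Returns (per-dimension weighted gaps, total weighted gap)."""
    assert sum(points.values()) == 100, "constant sum scale must total 100"
    by_dim = {d: gaps[d] * points[d] / 100 for d in gaps}
    return by_dim, sum(by_dim.values())

gaps = {"tangibles": -0.5, "reliability": -1.5, "responsiveness": -1.0,
        "assurance": -0.8, "empathy": -0.6}
points = {"tangibles": 10, "reliability": 35, "responsiveness": 25,
          "assurance": 20, "empathy": 10}
per_dim, total = weighted_gap(gaps, points)
print(round(total, 3))
```

Because reliability carries the most points in this example, its shortfall dominates the total; this is exactly the information a manager deciding between reliability and empathy investments would want.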
and recast them for Q-method. Subjects will be asked to sort the items twice: first, on the basis of an ideal service provider (their expectations); second, on the basis of the actual service provider (their perceptions). This is very similar to a technique described by Stephenson (1953), the father of Q-method, in his study of student achievement and motivation.

Customer service life cycle

The customer service life cycle, a variation on the customer resource life cycle (Ives and Learmonth, 1984; Ives and Mason, 1990), breaks down the service relationship with a customer into four major phases: requirements, acquisition, stewardship, and retirement. It is highly likely users' expectations differ among these phases. Empathy might be the major need during requirements and reliability during stewardship. Thus, examining service quality by customer service life cycle phase is an opportunity for future research.

[...]tems, an IS department performs other functions such as responding to questions about software products, providing training, and giving equipment advice. In each of these cases, the user's goal is to acquire information. Thus, the IS department delivers information through both highly structured information systems and customized personal interactions. Clearly, providing information is a fundamental service of an IS department, and it should be concerned with the quality of service it delivers. Thus, we believe, the effectiveness of an IS unit can be partially assessed by its capability to provide quality service to its users.

The major contribution of this study is to demonstrate that SERVQUAL, an extensively applied marketing instrument for measuring service quality, is applicable in the IS arena. The paper's other contributions are highlighting the service component of the IS department, augmenting the IS success model, presenting a logical model for users' expectations, and giving some directions for future research.
Cameron, K.S. and Whetton, D.A. Organizational Effectiveness: A Comparison of Multiple Models, Academic Press, New York, NY, 1983.
Camp, R.C. Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, Quality Press, Milwaukee, WI, 1989.
Carman, J.M. "Consumer Perceptions of Service Quality: An Assessment of SERVQUAL Dimensions," Journal of Retailing (66:1), Spring 1990, pp. 33-53.
Churchill, G.A. "A Paradigm for Developing Better Measures of Marketing Constructs," Journal of Marketing Research (16), February 1979, pp. 64-73.
Conrath, D.W. and Mignen, O.P. "What is Being Done to Measure User Satisfaction with EDP/MIS," Information & Management (19:1), August 1990, pp. 7-19.
Cronbach, L.J. "Coefficient Alpha and the Internal Structure of Tests," Psychometrika (16:3), September 1951, pp. 297-333.
DeLone, W.H. and McLean, E.R. "Information Systems Success: The Quest for the Dependent Variable," Information Systems Research (3:1), March 1992, pp. 60-95.
Deming, W.E. "Improvement of Quality and Productivity Through Action by Management," National Productivity Review, Winter 1981-82, pp. 12-22.
Fisk, R.P., Brown, S.W., and Bitner, M.J. "Tracking the Evolution of the Services Marketing Literature," Journal of Retailing (69:1), Spring 1993, pp. 61-103.
Gronroos, C. Strategic Management and Marketing in the Service Sector, Swedish School of Economics and Business Administration, Helsingfors, Finland, 1982.
Hannan, M.T. and Freeman, J. "Obstacles to Comparative Studies," in New Perspectives on Organizational Effectiveness, P.S. Goodman and J.M. Pennings (eds.), Jossey-Bass, San Francisco, CA, 1977, pp. 106-131.
Ives, B. and Learmonth, G.P. "The Information System as a Competitive Weapon," Communications of the ACM (27:12), December 1984, pp. 1193-1201.
Ives, B. and Mason, R. "Can Information Technology Revitalize Your Customer Service?" Academy of Management Executive (4:4), November 1990, pp. 52-69.
Johnson, R.A. and Wichern, D.W. Applied Multivariate Statistical Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1982.
Laudon, K.C. and Laudon, J.P. Management Information Systems: A Contemporary Perspective (2nd ed.), Macmillan, New York, NY, 1991.
Lovelock, C.H. "Classifying Services to Gain Strategic Marketing Insights," Journal of Marketing (47), Summer 1983, pp. 9-20.
Mason, R.O. "Measuring Information Output: A Communication Systems Approach," Information & Management (1:5), 1978, pp. 219-234.
Moad, J. "Asking Users to Judge IS," Datamation (35:21), November 1, 1989, pp. 93-100.
Nunnally, J.C. Psychometric Theory (2nd ed.), McGraw-Hill, New York, NY, 1978.
Parasuraman, A., Zeithaml, V.A., and Berry, L.L. "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing (49), Fall 1985, pp. 41-50.
Parasuraman, A., Zeithaml, V.A., and Berry, L.L. "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing (64:1), Spring 1988, pp. 12-40.
Parasuraman, A., Berry, L.L., and Zeithaml, V.A. "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing (67:4), Winter 1991, pp. 420-450.
Parasuraman, A., Berry, L.L., and Zeithaml, V.A. "More on Improving the Measurement of Service Quality," Journal of Retailing (69:1), Spring 1993, pp. 140-147.
Pitt, L.F. and Watson, R.T. "Longitudinal Measurement of Service Quality in Information Systems: A Case Study," Proceedings of the Fifteenth International Conference on Information Systems, Vancouver, B.C., 1994.
Quinn, R.E. and Rohrbaugh, J. "A Spatial Model of Effectiveness Criteria: Towards a Competing Values Approach to Organizational Analysis," Management Science (29:3), March 1983, pp. 363-377.
Rockart, J.F. "The Changing Role of the Information Systems Executive: A Critical Success Factors Perspective," Sloan Management Review (24:1), Fall 1982, pp. 3-13.
Roget, P.M. Roget's International Thesaurus, revised by Chapman, R.L. (4th ed.), Thomas Y. Crowell, New York, NY, 1977.
Rushinek, A. and Rushinek, S.F. "What Makes Users Happy?" Communications of the ACM (29:7), July 1986, pp. 594-598.
Sasser, W.E., Olsen, R.P., and Wyckoff, D.D. Management of Service Operations: Text and Cases, Allyn and Bacon, Boston, MA, 1978.
Shannon, C.E. and Weaver, W. The Mathematical Theory of Communication, University of Illinois Press, Urbana, IL, 1949.
Shostack, G.L. "Breaking Free from Product Marketing," Journal of Marketing (41:2), April 1977, pp. 73-80.
Stephenson, W. The Study of Behavior: Q-technique and Its Methodology, University of Chicago Press, Chicago, IL, 1953.
Zeithaml, V., Parasuraman, A., and Berry, L.L. Delivering Quality Service: Balancing Customer Perceptions and Expectations, Free Press, New York, NY, 1990.
Zeithaml, V.A., Berry, L.L., and Parasuraman, A. "The Nature and Determinants of Customer [...]

[...] and executive programs in the U.S., Australia, Singapore, South Africa, France, Malta, and Greece. His publications have been accepted by such journals as The Journal of Small Business Management, Industrial Marketing Management, Journal of Business Ethics, and Omega. Current research interests are in the areas of services marketing, market orientation, and international marketing.

Richard T. Watson is an associate professor and graduate coordinator in the Department of Management at the University of Georgia. He has a Ph.D. in MIS from the University of Minnesota. His publications include articles in MIS Quarterly, Communications of the ACM, Journal of Business Ethics, Omega, and Communication Research. His research focuses on national culture and its impact on group support systems, management of IS, and national information infrastructure. He has taught in more than 10 countries.
Appendix
Service Quality Expectations
Directions: This survey deals with your opinion of the Information Systems Department (IS). Based on
your experiences as a user, please think about the kind of IS unit that would deliver excellent quality of
service. Think about the kind of IS unit with which you would be pleased to do business. Please show
the extent to which you think such a unit would possess the feature described by each statement. If you
strongly agree that these units should possess a feature, circle 7. If you strongly disagree that these
units should possess a feature, circle 1. If your feeling is less strong, circle one of the numbers in the
middle. There are no right or wrong answers-all we are interested in is a number that truly reflects your
expectations about IS.
Please respond to ALL the statements
Strongly disagree (1) to Strongly agree (7)
Strongly disagree (1) to Strongly agree (7)

P7 IS is dependable. 1 2 3 4 5 6 7

P15 Users will feel safe in their transactions with IS's employees. 1 2 3 4 5 6 7

Poor (1) to Excellent (7)
MIS Quarterly