
End-User Satisfaction

The Measurement of End-User Computing Satisfaction

By: William J. Doll, Professor of MIS and Strategic Management, The University of Toledo; and Gholamreza Torkzadeh, Assistant Professor of Information Systems and Management Science, The University of Toledo, 2801 West Bancroft Street, Toledo, Ohio 43606

End-user computing (EUC) is one of the most significant phenomena to occur in the information systems industry in the last ten years (Benson, 1983; Lefkovits, 1979). Although still in its early stages, signs of rapid growth are evident. In the companies they studied, Rockart and Flannery (1983) found annual EUC growth rates of 50 percent to 90 percent. Benjamin (1982) has predicted that by 1990 EUC will absorb as much as 75 percent of the corporate computer budget. Because of these trends, Rockart and Flannery call for better management to improve the success of end-user computing. Without improved management, they see the adverse effects of the Nolan-Gibson (1974) "control" stage constraining development of this new phenomenon.

This article contrasts traditional versus end-user computing environments and reports on the development of an instrument which merges ease of use and information product items to measure the satisfaction of users who directly interact with the computer for a specific application. Using a survey of 618 end users, the researchers conducted a factor analysis and modified the instrument. The results suggest a 12-item instrument that measures five components of end-user satisfaction: content, accuracy, format, ease of use, and timeliness. Evidence of the instrument's discriminant validity is presented. Reliability and validity are assessed by nature and type of application. Finally, standards for evaluating end-user applications are presented, and the instrument's usefulness for achieving more precision in research questions is explored.

Keywords: End-user computing, user satisfaction, end-user computing satisfaction, management

ACM Categories: K.6.4, K.6.0

To improve the management of EUC, Cheney, et al. (1986) call for more empirical research on the factors which influence the success of end-user computing. Henderson and Treacy (1986) describe a sequence of perspectives (implementation, marketing, operations, and economic) for managing end-user computing and identify objectives for each phase. In the implementation phase, they maintain that objectives should focus on increased usage and user satisfaction. As the organization gains experience with end-user computing, they recommend increased emphasis on market penetration and objectives that are more difficult to evaluate, such as integration, efficiency, and competitive advantage.

Ideally, one would like to evaluate EUC based on its degree of use in decision making and the resultant productivity and/or competitive advantages. Crandall (1969) describes these resultant benefits as utility in decision making. However, this "decision analysis" approach is generally not feasible (Gallagher, 1974; Nolan and Seward, 1974). End-user computing satisfaction (EUCS) is a potentially measurable surrogate for utility in decision making. An end-user application's utility in decision making is enhanced when the outputs meet the user's information requirements (described by Bailey and Pearson (1983) as "information product") and the application is easy to use. Ease of use or "user friendliness" is especially important in facilitating voluntary managerial use of inquiry or decision support systems.

MIS Quarterly/June 1988, 259


In a voluntary situation, system usage can also be a surrogate measure of system success. Ives, et al. (1983) argue that usage of an information or decision support system is often not voluntary (e.g., when usage is mandated by management). In this involuntary situation, perceptual measures of satisfaction may be more appropriate. Also, both theory (Fishbein and Ajzen, 1975) and a recent path analysis (Baroudi, et al., 1986) suggest that satisfaction leads to usage rather than usage stimulating satisfaction. Thus, user satisfaction may be the critical factor.

The growth of end-user computing is presenting new challenges for information system managers. Measures of user information satisfaction developed for a traditional data processing environment may no longer be appropriate for an end-user environment where users directly interact with application software. Indeed, user information satisfaction instruments have not been designed or validated for measuring end-user satisfaction. They focus on general satisfaction rather than on a specific application, and they omit aspects important to end-user computing, such as ease of use. Hence, this study distinguishes between user information satisfaction and an end user's satisfaction with a specific application. This article reports on the development of an instrument designed to measure the satisfaction of users who directly interact with a specific application. The focus is on measuring EUCS among data processing (DP) amateurs and non-DP-trained users rather than DP professionals.
The explicit goals of this research were to develop an instrument that:

1. Focuses on satisfaction with the information product provided by a specific application;
2. Includes items to evaluate the ease of use of a specific application;
3. Provides Likert-type scales as an alternative to semantic differential scaling;
4. Is short, easy to use, and appropriate for both academic research and practice;
5. Can be used with confidence across a variety of applications (i.e., adequate reliability and validity); and
6. Enables researchers to explore the relationships between end-user computing satisfaction and plausible independent variables (i.e., user computing skills, user involvement, EDP support policies and priorities, etc.).

An additional goal was to identify underlying factors or components of end-user computing satisfaction.

The End-User Computing Satisfaction Construct

In a traditional data processing environment (see Figure 1), users interact with the computer indirectly, through an analyst/programmer or through operations. Routine reports might be requested from operations. For ad hoc or nonroutine requests, an analyst/programmer assists the user. In this environment, a user might be unaware of what specific programs are run to produce reports.

In an end-user computing environment (see Figure 2), decision makers interact directly with the application software to enter information or prepare output reports. Decision support and database applications characterize this emerging end-user phenomenon. The environment typically includes a database, a model base, and an interactive software system that enables the user to directly interact with the computer system (Sprague, 1980). Although vast improvements have been made in end-user software (Canning, 1981; Martin, 1982), efforts to improve the man-machine interface continue (Sondheimer and Relles, 1982; Yavelberg, 1982).

Figures 1 and 2 do not depict all the differences between traditional and end-user computing environments. Other differences such as software, hardware, support requirements, and control procedures are not illustrated. Rather, the intent of these figures is to illustrate that, in an end-user computing environment, analysts/programmers and operations staff are less directly involved in user support; users assume more responsibility for their own applications. Systems personnel might assist in the selection of appropriate software tools, but the end users are largely on their own to design, implement, modify, and run their own applications. Training programs, experienced colleagues, and manuals provide some assistance. However, the goal of information system staff and service policies typically focuses on enabling end users to function more independently, to solve many problems on their own.










Figure 1. The Traditional DP Environment

The definition of end-user computing

Davis and Olson (1985) describe this changing role of the user. To define end-user computing, they distinguish between primary and secondary user roles. The primary user makes decisions based on the system's output. The secondary user is responsible for interacting with the application software to enter information or prepare output reports, but does not use the output directly in his or her job. In end-user computing, the two roles are combined: the person who utilizes the system output also develops it. In contrast, the CODASYL end-user facilities committee (Lefkovits, 1979) provides a broader definition of end-user computing to include: "indirect" end users who use computers through other people; "intermediate" end users who specify business information requirements for reports they ultimately receive; and "direct" end users who actually use terminals. However, for the most part, writers in this area such as Martin (1982), McLean (1979), and Rockart and Flannery (1983) limit their definition of end users to individuals who interact directly with the computer. This research uses the more limited definition.

End-user computing satisfaction is conceptualized as the affective attitude towards a specific computer application by someone who interacts with the application directly. End-user satisfaction can be evaluated in terms of both the primary and secondary user roles. User information satisfaction, especially the information product, focuses on the primary role and is independent of the source of the information (i.e., the application). Secondary user satisfaction varies by application; it depends on an application's ease of use. Despite the growing hands-on use of inquiry and decision support applications by managerial, professional, and operating personnel, research on user information satisfaction instruments has emphasized the primary user role, measuring overall user information satisfaction.







Figure 2. The End-User Computing Environment

The Ives, Olson and Baroudi instrument

Focusing on "indirect" or "intermediate" users, Bailey and Pearson (1983) interviewed 32 middle managers and developed a semantic differential instrument measuring overall computer user satisfaction. Later, Ives, et al. (1983) surveyed production managers (e.g., "indirect" or "intermediate" users), conducted a factor analysis of the Bailey and Pearson instrument, and reported on a shorter version of this instrument. After two factors identified as "information product" were combined and a vendor support factor eliminated, the Ives, et al., study suggested three factors: EDP staff and services; information product; and knowledge or involvement. However, the ratio of sample size to number of scales (7:1) must be regarded with some caution.

Other validation studies have expressed some concerns. Using a sample of "indirect" and "intermediate" users, Treacy (1985) assessed the reliability and validity of the Ives, et al., instrument. He concludes that this instrument is an important contribution, but has difficulties in three areas: the variables found through exploratory factor analysis were labeled in imprecise and ambiguous terms; many of the questions used were poor operationalizations of their theoretical variables; and the instrument failed to achieve discriminant validity. Also, Galletta and Lederer (1986) found test-retest reliability problems with the Ives, et al., instrument and, because of the heterogeneity of the items (information product, EDP staff and services, user involvement), expressed the need for caution in interpreting results.

These concerns are not widely shared. The Ives, et al., instrument is frequently used (Barki and Huff, 1985; Mahmood and Becker, 1985-86; Raymond, 1985; Galletta, 1986) and is, to date, probably the best available measure of user information satisfaction (Galletta and Lederer, 1986). However, this instrument has not been used in end-user computing research. The Ives, et al., instrument was designed for the more traditional data processing environment. It




measures general user satisfaction with EDP staff and services, information product, and user involvement/knowledge rather than satisfaction with a specific application. Indeed, it has not been validated for use in assessing specific end-user applications. It also ignores important ease of use aspects of the man-machine interface.

Ease of use has become increasingly important in software design (Branscomb and Thomas, 1984). There is increasing evidence that the effective functioning of an application depends on its ease of use or usability (Goodwin, 1987). If end users find an application easy to use, they may become more advanced users and, therefore, better able to take advantage of the range of capabilities the software has to offer. Also, ease of use may improve productivity or enable decision makers to examine more alternatives.

Both the EDP staff and services items and the user involvement/knowledge items seem inappropriate for an end-user environment. The end-user environment requires new EDP staff and service policies. End users have less direct interaction with analysts/programmers or operations. Rather than emphasizing direct support for user information requests, EDP staff and service policies emphasize more indirect and behind-the-scenes technical efforts to improve hardware, languages, data management, privacy, security, and restart/recovery (Rockart and Flannery, 1983). Most end users would not be able to evaluate these activities. Thus, several EDP staff and service items in the Ives, et al., instrument seem less appropriate in an end-user environment. These items include: relationship with EDP staff; processing of requests for system changes; attitude of EDP staff; communication with EDP staff; time required for system development; and personal control of EDP services.

By their nature, these items assume a more traditional computing environment and, like the user knowledge/involvement and information product items, are not application specific. In addition, EDP staff/services and user knowledge/involvement items seemed more appropriately viewed as independent rather than dependent variables in an end-user computing environment. End-user knowledge and involvement in development is generally considered to be positively correlated with satisfaction. Also, Rockart and Flannery (1983) suggest that end-user skill levels and EDP support policies can affect the success of end-user computing. For these reasons, EDP staff/services and user knowledge/involvement items were excluded when the researchers generated items to measure end-user computing satisfaction.

Research Methods

To ensure that a comprehensive list of items was included, the works of previous researchers (Bailey and Pearson, 1983; Debons, et al., 1978; Neuman and Segev, 1980; Nolan and Seward, 1974; Swanson, 1974; Gallagher, 1974) were reviewed. Based on this review, the researchers generated 31 items to measure end-user perceptions. To measure "ease of use" of an application, a construct which seemed to be missing from the previous works reviewed, seven additional items were also included. Two global measures of perceived overall satisfaction and success were added to serve as a criterion. Thus, a 40-item instrument (see the Appendix) was developed using a five-point Likert-type scale, where 1 = almost never; 2 = some of the time; 3 = about half of the time; 4 = most of the time; and 5 = almost always. The instructions requested the users to write in the name of their specific application and, for each question, to circle the response which best described their satisfaction with this application. Next, a structured interview questionnaire was developed in which users were asked open-ended questions such as: How satisfied were they with the application? What aspects of the application, if any, were they most satisfied with and why? What aspects of the application, if any, were they most dissatisfied with and why?
Pilot Study
To make the results more generalizable, the researchers attempted to gather data from a variety of organizations. Five organizations were selected: a manufacturing firm, two hospitals, a city government office, and a university. A sample of 96 end users, with approximately an equal number of responses from each organization, was obtained. Data were gathered by research assistants through personal interviews with end users.



The personal interviews enabled the assistants to verify that the respondent directly interacted with the application software. The research assistants first conducted open-ended structured interviews and recorded the end user's comments; then, the Likert-type questionnaire was administered. To assess whether the instrument was capturing the phenomenon desired by the researchers and to verify that important aspects of satisfaction were not omitted, qualitative comments from the structured interviews were compared with the responses to the 40 questions. The end users' overall level of satisfaction and the specific aspects that satisfied or dissatisfied end users supported the instrument. This also enabled the researchers to verify that the respondents knew what the items were asking.

To ensure that the items measured the end-user computing construct, the construct validity of each item was examined. Kerlinger (1978) cites two methods of construct validation: (1) correlations between total scores and item scores, and (2) factor analysis. The first approach assumes that the total score is valid; thus, the extent to which the item correlates with the total score is indicative of construct validity for the item. In this study, each item score was subtracted from the total score in order to avoid a spurious part-whole correlation (Cohen and Cohen, 1975); the result is a corrected item total (sum for 37 items) which was then correlated with the item score. In this pilot test, factor analysis was not used to assess construct validity because the ratio of sample size to number of items (2:1) was considered too low.

A measure of criterion-related validity (Kerlinger, 1978) was also examined to identify items which were not closely related to the end-user computing construct. The two global items measuring perceived overall satisfaction and success of the application were assumed to be valid measures, and the sum of the two items was used as a criterion scale.
The items comprising this criterion scale were: "Is the system successful?" and "Are you satisfied with the system?" The extent to which each item was correlated with this two-item criterion scale provided a measure of criterion-related validity.

Items were eliminated if their correlation with the corrected item total was below .5 or if their correlation with the two-item criterion scale was below .4. These cutoffs were arbitrary; there are no accepted standards. The correlations with the corrected item total (r ≥ .5) and the two-item criterion (r ≥ .4) were significant at p < .001 and comparable to those used by other researchers (Ives, et al., 1983). Thus, the cutoffs were considered high enough to ensure that the items retained were adequate measures of the end-user computing satisfaction construct. These two criteria enabled the researchers to reduce the 38 items to 23. Five additional items were deleted because they represented the same aspects with only slightly different wordings (e.g., "Does the system provide up-to-date information?" and "Do you find the information up-to-date?"). In each case, the wording with the lowest corrected item total correlation was deleted. In the pilot study, the remaining 18 items had a reliability (Cronbach's alpha) of .94 and a correlation of .81 with the two-item criterion scale.
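The two screening criteria can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function and variable names are ours, and it assumes a respondents-by-items matrix of Likert scores plus a per-respondent criterion score.

```python
import numpy as np

def screen_items(responses, criterion, min_item_total=0.5, min_criterion=0.4):
    """Apply the two pilot-study retention criteria to each item:
    (1) correlation with the corrected item total (the total score minus
        the item itself, avoiding a spurious part-whole correlation), and
    (2) correlation with the criterion scale."""
    responses = np.asarray(responses, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    total = responses.sum(axis=1)
    keep = []
    for j in range(responses.shape[1]):
        corrected_total = total - responses[:, j]  # part-whole correction
        r_item_total = np.corrcoef(responses[:, j], corrected_total)[0, 1]
        r_criterion = np.corrcoef(responses[:, j], criterion)[0, 1]
        keep.append(bool(r_item_total >= min_item_total
                         and r_criterion >= min_criterion))
    return keep
```

Items failing either cutoff would be dropped before the subsequent wording-redundancy pass.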

Survey methods
To further explore this 18-item instrument, the questionnaire was administered to 44 firms. The sample was select rather than random; however, the large number of firms used supports the generalizability of the findings. In each of these firms, the MIS director was asked to identify the major applications and the major users who directly interact with each application. In many cases, the MIS director consulted with the heads of user departments to identify major end users. This method may have failed to identify a few major end users, especially microcomputer users. However, working through the MIS director was considered a practical necessity.

In this survey, a separate criterion question ("Overall, how would you rate your satisfaction with this application?") was used. The criterion question used a five-point scale: 1 = nonexistent; 2 = poor; 3 = fair; 4 = good; and 5 = excellent. Data were gathered by research assistants who first conducted personal interviews with the end users (using the same structured interview process used in the pilot study) and then administered the questionnaire. Again, the personal interviews enabled the research assistants to verify that the respondents directly interacted with application software. The researchers compared the more qualitative interview comments with the questionnaire data to identify inconsistencies (i.e., respondents who did not complete




the questionnaire carefully). Only about eight respondents were discarded because interview comments did not correspond with the questionnaire data. A sample of 618 usable end-user responses was obtained. This sample represented 250 different applications, with an average of 2.5 responses per application.

Bartlett's test of sphericity had a chi-square value of 8033.46 and a significance level of .00000. This suggests that the intercorrelation matrix contains enough common variance to make factor analysis worth pursuing. The ratio of sample size to number of items (34:1) was well above the minimum 10:1 ratio suggested for factor analysis by Kerlinger (1978). However, in this case, a large sample was considered essential. The items being factor analyzed were selected because they were closely related to each other (i.e., all items were thought to be measures of the same EUCS construct). Thus, the items could be expected to have considerable common variance and relatively large error variance compared to their unique variance. To assess reliability and validity by nature and type of application, users were asked whether their application was: end-user developed; microcomputer or mainframe; and monitor, exception reporting, inquiry, or analysis (Alloway and Quillard, 1983).
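A Bartlett statistic of this kind can be computed from the raw scores with the standard formula (a sketch with our own function name; the statistic is referred to a chi-square distribution with p(p-1)/2 degrees of freedom):

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: tests whether the items'
    correlation matrix is an identity matrix, i.e., whether there is
    enough common variance to make factor analysis worth pursuing.
    Returns the chi-square statistic and its degrees of freedom."""
    X = np.asarray(data, dtype=float)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # chi2 = -[(n - 1) - (2p + 5) / 6] * ln|R|
    chi2 = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df
```

A large statistic (such as the 8033.46 reported here for 618 responses to 18 items) rejects the hypothesis that the items are uncorrelated.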

Sample characteristics
The sample contains responses from a variety of industries and management levels (see Table 1). The respondents indicated that 41.9 percent of the applications were "primarily developed by an end user," but only 91 (14.7 percent) had personally developed the application themselves. Twenty-five percent were microcomputer applications, whereas 75 percent were mini or mainframe applications. The applications were 37.6 percent decision support, 19.3 percent database, 19.8 percent exception reporting, 19.9 percent monitor, and 3.4 percent other (e.g., word processing).

Data Analysis
The researchers conducted an exploratory factor analysis and modified the instrument, examined discriminant validity of the modified instrument, and assessed reliability and criterion-related validity by nature and type of application (Kerlinger, 1978; Schoenfeldt, 1984). Factor analysis was used to identify the underlying factors or components of end-user satisfaction that comprise the domain of the end-user satisfaction construct. Items which were not factorially pure were deleted to form a modified instrument that would facilitate the testing of more specific hypotheses (Weiss, 1970). The researchers attempted to avoid the use of imprecise and ambiguous terms to label the factors (Bagozzi, 1981), and examined discriminant validity (Campbell and Fiske, 1959).

Table 1. Respondents by Industry and Position

Respondents by Industry
  Manufacturing                                 42.6%
  Finance, banking & insurance                   4.5%
  Education                                      3.7%
  Wholesale & retail                             6.5%
  Transportation, communication & utilities      9.5%
  Government agencies                            9.1%
  Health services/hospitals                     16.7%
  Other                                          7.4%
  Total                                        100.0%

Respondents by Position
  Top management                                 4.2%
  Middle management                             31.2%
  First-level supervisor                        20.4%
  Professional employees without
    supervisory responsibilities                28.7%
  Other operating personnel                     15.5%
  Total                                        100.0%

Factor analysis
Using the sample of 618 responses, the data were examined using principal components analysis as the extraction technique and varimax as the method of rotation. Without specifying the number of factors, three factors with eigenvalues greater than one emerged. These factors were interpreted as content/format, accuracy/timeliness, and ease of use/efficiency. These labels were considered imprecise because the factors appeared to contain two different types of items (e.g., content and format items; accuracy and timeliness items). To achieve more precise and interpretable factors, the analysis was conducted specifying two, four, five, and six factors. The researchers felt that specifying five factors resulted in the most interpretable structure. These factors were interpreted as content, accuracy, format, ease of use, and timeliness and explained 78.0 percent of the variance. The loadings of the 18 items on each factor (for factor loadings greater than .30) are depicted in Table 2, and a description of each item (C1 through T2) is provided in Table 3.
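The extraction-and-rotation step can be sketched as follows. These helper functions are ours, not the authors'; principal-component loadings are the eigenvectors of the correlation matrix scaled by the square roots of their eigenvalues, and the rotation uses the standard SVD-based varimax algorithm.

```python
import numpy as np

def principal_loadings(data, n_factors):
    """Unrotated principal-component loadings (items x factors)."""
    R = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1][:n_factors]
    # Scale eigenvectors by sqrt(eigenvalue) to obtain loadings
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loadings matrix to maximize the varimax
    criterion (large loadings pushed larger, small ones toward zero)."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        # Gradient of the varimax criterion with respect to the rotation
        grad = L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt
        d_new = s.sum()
        if d_old != 0 and d_new / d_old < 1 + tol:
            break
        d_old = d_new
    return L @ R
```

Items would then be grouped by their highest rotated loading, with loadings below .30 suppressed, as in Table 2.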

The items are grouped in Table 2 by their highest (primary) factor loading. A number of items had factor loadings above .3 or .4 on additional (nonprimary) factors. Items with many multiple loadings may be excellent measures of overall end-user satisfaction, but including them in the scale blurs the distinction between factors. To improve the distinction between factors, items which had factor loadings greater than .3 on three or more factors were deleted from the scale; these include C5, A3, A4, F3, F4, and E3. These deletions resulted in a 12-item scale for measuring end-user computing satisfaction and improved the match between the factor labels and the questions. In the modified 12-item instrument, only one item (C4) had a primary factor loading below .7. Furthermore, none of the items had a secondary loading above .4. Each of these 12 items had a corrected item total correlation above .63 (a measure of internal consistency) and a correlation with the criterion measure above .51 (see Table 3). Figure 3 illustrates this modified model for measuring end-user computing satisfaction.

This 12-item instrument had a reliability of .92 and a criterion-related validity of .76. The criterion was the separate measure of overall end-user satisfaction with the application. The reliability (alpha) of each factor was: content = .89; accuracy = .91; format = .78; ease of use = .85; and timeliness = .82. The correlation of each factor with the criterion was: content = .69; accuracy = .55; format = .60; ease of use = .58; and timeliness = .60.

Table 2. Rotated Factor Matrix of 18-Item Instrument
[Loadings of items C1-C5, A1-A4, F1-F4, E1-E3, and T1-T2 on the content, accuracy, format, ease of use, and timeliness factors; only loadings greater than .30 were reported.]

Table 3. Reliability and Criterion-Related Validity of Measures of End-User Satisfaction

Item                                                                 Corrected    Correlation
Code  Item Description                                               Item-Total   With Criterion
C1    Does the system provide the precise information you need?         .77           .62
C2    Does the information content meet your needs?                     .76           .62
C3    Does the system provide reports that seem to be just about
      exactly what you need?                                            .72           .60
C4    Does the system provide sufficient information?                   .70           .55
C5    Do you find the output relevant?                                  .76           .59
A1    Is the system accurate?                                           .69           .54
A2    Are you satisfied with the accuracy of the system?                .68           .51
A3    Do you feel the output is reliable?                               .73           .54
A4    Do you find the system dependable?                                .70           .65
F1    Do you think the output is presented in a useful format?          .66           .54
F2    Is the information clear?                                         .72           .55
F3    Are you happy with the layout of the output?                      .73           .58
F4    Is the output easy to understand?                                 .75           .57
E1    Is the system user friendly?                                      .63           .52
E2    Is the system easy to use?                                        .67           .57
E3    Is the system efficient?                                          .75           .68
T1    Do you get the information you need in time?                      .69           .56
T2    Does the system provide up-to-date information?                   .67           .55
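The reliability figures reported here are Cronbach's alpha coefficients, which can be computed directly from the item scores (a sketch; the function name is ours):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1)
    total_variance = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)
```

Applied once over all 12 items this yields the overall reliability, and applied to each factor's items it yields the per-factor reliabilities.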

Convergent and discriminant validity analysis

Table 4 presents the measure correlation matrix, means, and variances. The multitrait-multimethod (MTMM) approach to convergent validity tests that the correlations between measures of the same theoretical construct are different from zero and large enough to warrant further investigation. The smallest within-variable (factor) correlations are: content = .59; accuracy = .82; format = .64; ease of use = .75; and timeliness = .70. For a sample of 618, these are significantly different from zero (p < .001) and large enough to encourage further investigation. Using the MTMM approach, discriminant validity is tested for each item by counting the number of times it correlates more highly with an item of another variable (factor) than with items of its own theoretical variable. Campbell and Fiske (1959) suggest determining whether this count is higher than one-half the potential comparisons. However, in this case, common method variances are present, so it is unclear how large a count would be acceptable. An examination of the matrix in Table 4 reveals zero violations (out of 112 comparisons) of the



condition for discriminant validity. For example, the lowest correlation between C1 and the other content items is .67, with C4. This correlation is higher than C1's correlation with the other eight noncontent items. Each of the 12 items is more highly correlated with the other item(s) in its group than with any of the items measuring other variables.
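The violation count described above can be sketched as follows (the function is our own illustration; the 112 comparisons arise because each of the four content items is compared against 8 cross-factor items, and each of the remaining eight items against 10):

```python
import numpy as np

def discriminant_violations(corr, factor_of):
    """Campbell-Fiske style count: for each item, how many cross-factor
    correlations exceed that item's lowest within-factor correlation.
    Returns (violations, total comparisons)."""
    corr = np.asarray(corr, dtype=float)
    n = corr.shape[0]
    violations = comparisons = 0
    for i in range(n):
        within = [corr[i, j] for j in range(n)
                  if j != i and factor_of[j] == factor_of[i]]
        cross = [corr[i, j] for j in range(n)
                 if factor_of[j] != factor_of[i]]
        floor = min(within)  # lowest correlation with an own-factor item
        comparisons += len(cross)
        violations += sum(c > floor for c in cross)
    return violations, comparisons
```

For the 12-item instrument the factor assignment is four content items plus two items each for accuracy, format, ease of use, and timeliness; the reported result is zero violations out of 112.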

Reliability and criterion validity analysis by nature and type of application

Table 5 describes the reliability and criterion-related validity of the 12-item scale by nature and type of application. The instrument appears

End-User Computing Satisfaction

CONTENT
C1: Does the system provide the precise information you need?
C2: Does the information content meet your needs?
C3: Does the system provide reports that seem to be just about exactly what you need?
C4: Does the system provide sufficient information?
ACCURACY
A1: Is the system accurate?
A2: Are you satisfied with the accuracy of the system?
FORMAT
F1: Do you think the output is presented in a useful format?
F2: Is the information clear?
EASE OF USE
E1: Is the system user friendly?
E2: Is the system easy to use?
TIMELINESS
T1: Do you get the information you need in time?
T2: Does the system provide up-to-date information?

Figure 3. A Model for Measuring End-User Computing Satisfaction



Table 4. Correlation Matrix of Measures (n = 618)

[The 12 x 12 inter-item correlation matrix is not reproduced here. As reported in the text, each item correlates more highly with the other item(s) of its own factor than with any item of another factor.]

Variable (Item)   Mean    Variance
C1                3.891     .920
C2                3.972     .822
C3                3.862    1.056
C4                4.037     .799
A1                4.297     .729
A2                4.207     .754
F1                4.099     .668
F2                4.286     .660
E1                3.964    1.238
E2                4.080    1.028
T1                4.096     .950
T2                4.247     .853

Table 5. Scale Reliability and Criterion Validity by Nature and Type of Application

                                        Cronbach's Alpha    Correlation Between Criterion
                                        for 12-Item Scale   and 12-Item Scale*
For All Applications                          .92                  .76
Microcomputer Application?
  Yes (n = 147)                               .91                  .64
  No (n = 429)                                .93                  .78
Type of Application?
  Other (word processing) (n = 19)            .94                  .85
  Monitor Applications (n = 112)              .93                  .84
  Exception Applications (n = 117)            .90                  .65
  Inquiry Applications (n = 111)              .92                  .68
  Analysis Applications (n = 223)             .94                  .79
End-User Developed Application?
  Yes (n = 236)                               .91                  .72
  No (n = 321)                                .93                  .77

* Significant at p < .000.



to have more than acceptable reliability and criterion-related validity for microcomputer and mainframe applications; for monitor, exception, inquiry, or analysis applications; and for end-user developed applications as well as those developed by more traditional methodology. The reliability was consistently above .90 and showed little variation by nature and type of application. With a minimum standard of .80 suggested for basic research and .90 suggested for use in applied settings where important decisions will be made with respect to specific test scores (Nunnally, 1978), the instrument's reliability is adequate for both academic research and practice. The correlations between the criterion question and the 12-item scale were consistently high (greater than .5) but, interestingly, showed more variation by nature and type of application. Mini or mainframe applications had a correlation of .78 with the criterion compared to .64 for microcomputer applications. Analysis (.79) and monitor (.84) applications had higher correlations with the criterion than exception (.65) or inquiry (.68) applications. In summary, it is the opinion of the researchers that the instrument presented in this article represents substantial progress towards establishment of a standard instrument for measuring end-user satisfaction. The data support the construct and discriminant validity of the instrument. Furthermore, the instrument appears to have adequate reliability and criterion-related validity across a variety of applications. However, continuing efforts should be made to validate the instrument. The test-retest reliability of the instrument should be evaluated, and another large multi-organizational sample should be gathered to confirm factor structure and discriminant validity.
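The reliability figures in Table 5 are Cronbach's alpha coefficients. A minimal sketch of the standard computation, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), is shown below on an invented response matrix (the data are illustrative, not the study's).

```python
# Sketch of Cronbach's alpha for an internal-consistency reliability
# estimate; X is a respondents x items matrix (invented example data).
import numpy as np

def cronbach_alpha(X):
    """X: respondents x items matrix of scale scores."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)      # sample variance per item
    total_var = X.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering three 5-point items.
X = [[5, 4, 5],
     [4, 4, 4],
     [2, 3, 2],
     [5, 5, 4],
     [3, 3, 3]]
print(round(cronbach_alpha(X), 2))  # 0.93
```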

Exploring the Instrument's Practical and Theoretical Application

In this section, suggestions are made for practical application of the instrument. Then tentative standards for more precisely evaluating end-user applications are presented. Next, the usefulness of the instrument for developing and testing more precise research questions is illustrated by exploring some hypotheses concerning the relationship between end-user involvement in design and end-user satisfaction. Finally, suggestions for further research are discussed.

Practical application

This 12-item instrument may be utilized to evaluate end-user applications. In addition to an overall assessment, it can be used to compare end-user satisfaction with specific components (i.e., content, accuracy, format, ease of use, or timeliness) across applications. Although there may be reasons to add questions evaluating unique features of certain end-user applications, this basic set of 12 items is general in nature, and experience indicates that it can be used for all types of applications. This provides a common framework for comparative analysis.

The sample data used in this study represent the major applications from 44 firms. This cross-organizational aspect of the sample makes it appropriate for the development of tentative standards. Percentile scores for the 12-item end-user computing satisfaction instrument are presented in Table 6. Other relevant sample statistics are: minimum = 16; maximum = 60; mean = 49.09; median = 51; and standard deviation = 8.302. These statistics may be useful in more precisely evaluating end-user satisfaction with a specific application.

Table 6. Percentile Scores: 12-Item Instrument

Percentile   Value
    10         37
    20         43
    30         46
    40         48
    50         51
    60         53
    70         54
    80         57
    90         59
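A practitioner could apply the Table 6 norms with a simple lookup. The helper below is illustrative (the function and example score are not from the article); it reports the highest tabled percentile whose norm a given 12-item total meets or exceeds.

```python
# Sketch: judging one application's 12-item total score against the
# tentative percentile standards of Table 6 (totals range 12-60).
from bisect import bisect_right

PERCENTILE_NORMS = [  # (percentile, score) pairs from Table 6
    (10, 37), (20, 43), (30, 46), (40, 48), (50, 51),
    (60, 53), (70, 54), (80, 57), (90, 59),
]

def approx_percentile(total_score):
    """Highest tabled percentile whose norm the score meets or exceeds."""
    values = [v for _, v in PERCENTILE_NORMS]
    i = bisect_right(values, total_score)
    return 0 if i == 0 else PERCENTILE_NORMS[i - 1][0]

print(approx_percentile(52))  # 50 (between the 50th and 60th norms)
```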

Theoretical application
In the development of this instrument, items that were not factorially pure were deleted. The five resultant components are relatively independent of each other. With such component measures, researchers may be able to achieve more precision in their research questions. Some components may be thought more closely associated with specific independent variables than others. The instrument provides a framework for formulating and testing such hypotheses. User information satisfaction has been used extensively in studies of user involvement (Ives and Olson, 1984); however, these studies used general measures and did not explore research questions concerning the components of satisfaction. For example, satisfaction with accuracy and timeliness is affected by how the application is operated (i.e., the promptness and care in data entry). In contrast, design rather than operational issues may be the dominant factors affecting satisfaction with content, format, and ease of use. Thus, one might expect end-user involvement in design to be more closely associated with content, format, and ease of use than with accuracy or timeliness. This suggests two sets of hypotheses; the first is general in nature and the second is more precise.

H1: User participation in design is positively correlated with end-user computing satisfaction and each of its components.

H2: User participation in design is more closely correlated with content, format, and ease of use than accuracy or timeliness.

These hypotheses are used to illustrate the usefulness of the end-user satisfaction instrument for examining such research questions. To explore these hypotheses, the researchers developed an eight-item Likert-type scale for measuring user involvement in the end-user context. End users were asked about the amount of time they spent in specific design activities (e.g., initiating the project, determining information needs, developing output format, etc.). This instrument had a reliability (Cronbach alpha) of .96. The results depicted in Table 7 support the first set of hypotheses: end-user satisfaction and each of its components are significantly correlated with the end-user's involvement in the design of the application.
To examine the second set of hypotheses, absolute differences between correlation coefficients were calculated (see Table 8). The results for ease of use do not support the second hypothesis: end-user involvement in design was less positively correlated with ease of use than with accuracy or timeliness. The results for content and format partially support the second hypothesis: end-user involvement in design was more positively correlated with content and format than with accuracy or timeliness. Using a test of the difference between correlation coefficients (Cohen and Cohen, 1975), two of these differences (content-accuracy and format-accuracy) were found to be significant at p < .05.

Table 7. Correlation Between End-User Involvement in Design and End-User Computing Satisfaction Constructs

                 End-User Involvement in Design
Overall EUCS                .32*
Content                     .30*
Accuracy                    .21*
Format                      .29*
Ease of use                 .20*
Timeliness                  .25*

* Significant at p = .000.

Table 8. Matrix of Differences in Correlations

               Accuracy   Timeliness
Content          .09*        .05
Format           .08*        .04
Ease of use      .01         .05

* Significant at p = .05.

The intent is not to test hypotheses per se or explain the results obtained, but rather to illustrate the usefulness of the end-user satisfaction instrument for developing and testing more precise research questions. The results suggest that some of the end-user satisfaction components derived by factor analysis may be more closely related to independent variables than others. In this illustration, end-user involvement in design was used as the independent variable. Future research efforts might focus on other independent variables such as end-user skill levels, EDP support policies, type of application, or the quality of user documentation.
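The article cites Cohen and Cohen (1975) for the comparison of correlation coefficients. One classical test for two dependent correlations that share a variable is Hotelling's (1940) t; the sketch below assumes that form and invents the inter-component correlation r23 (the correlation between the two satisfaction components), which the article does not report.

```python
# Sketch of Hotelling's t for comparing two dependent correlations
# r12 and r13 that share variable 1 (here, involvement in design).
# r23 and the significance threshold are illustrative assumptions.
from math import sqrt

def hotelling_t(r12, r13, r23, n):
    """t statistic with n - 3 degrees of freedom."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    return (r12 - r13) * sqrt((n - 3) * (1 + r23) / (2 * det))

# Content (.30) vs. accuracy (.21), assuming r23 = .50 and n = 618.
t = hotelling_t(0.30, 0.21, 0.50, 618)
print(round(t, 2))
```

A positive t favors the first correlation; under the assumed r23 the content-accuracy difference would exceed the usual two-tailed critical value of about 1.96 at p < .05, consistent with the result reported above.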

Conclusion

This article presents significant progress towards the development of a standard measure of end-user satisfaction with a specific application. Designed for an end-user computing environment rather than traditional data processing, the instrument merges ease of use and information product items. Whether or not this instrument is chosen, the authors encourage the MIS research community to move towards a standard instrument for measuring end-user satisfaction that includes both information product and ease of use items. The instrument appears to have adequate reliability and validity across a variety of applications. It is short, easy to use, and appropriate for both practical and research purposes. Standards are provided for use by practitioners. Its component factors are distinct, enabling researchers to develop and test more precise research questions.

The lack of adequate mechanisms to evaluate the effectiveness of end-user computing is evident. End-user satisfaction is only one of several relevant measures of end-user computing success. Additional work is needed to develop measures of the breadth of end-user computing in an organization (i.e., penetration) and the degree of sophistication (i.e., skill) of individual end users. Research on end-user computing's impact on efficiency, productivity, and competitive advantage would benefit from the availability of such measures.

References

Alloway, R.M. and Quillard, J.A. "User Managers' Systems Needs," MIS Quarterly (7:2), June 1983, pp. 27-41.
Bagozzi, R.P. "An Examination of the Validity of Two Models of Attitude," Multivariate Behavioral Research (16), 1981, pp. 323-359.
Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science (29:5), May 1983, pp. 530-545.
Barki, H. and Huff, S.L. "Change, Attitude to Change, and Decision Support System Success," Information and Management (9), 1985, pp. 261-268.
Baroudi, J.J., Olson, M.H., and Ives, B. "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction," Communications of the ACM (29:3), March 1986, pp. 232-238.
Benjamin, R.I. "Information Technology in the 1990's: A Long Range Planning Scenario," MIS Quarterly (6:2), June 1982, pp. 11-31.
Benson, D.H. "A Field Study of End-User Computing: Findings and Issues," MIS Quarterly (7:4), December 1983, pp. 35-45.
Branscomb, L.M. and Thomas, J.C. "Ease of Use: A System Design Challenge," IBM Systems Journal (23), 1984, pp. 224-235.
Campbell, D.T. and Fiske, D.W. "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix," Psychological Bulletin (56:1), 1959, pp. 81-105.
Canning, R.G. "Programming by End Users," EDP Analyzer (19:5), May 1981.
Cheney, P.H., Mann, R.I., and Amoroso, D.L. "Organizational Factors Affecting the Success of End-User Computing," Journal of Management Information Systems (3:1), Summer 1986, pp. 65-80.
Cohen, J. and Cohen, P. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Lawrence Erlbaum Associates, Hillsdale, NJ, 1975.
Crandall, R.H. "Information Economics and Its Implications for the Further Development of Accounting Theory," The Accounting Review (44), 1969, pp. 457-466.
Davis, G.B. and Olson, M.H. Management Information Systems: Conceptual Foundations, Structure, and Development, McGraw-Hill Book Co., New York, 1985, pp. 532-533.
Debons, A., Ramage, W., and Orien, J. "Effectiveness Model of Productivity," in Research on Productivity Measurement Systems for Administrative Services: Computing and Information Services (2), L.F. Hanes and C.H. Kriebel (eds.), July 1978, NSF Grant APR-20546.
Fishbein, M. and Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, MA, 1975.
Gallagher, C.A. "Perceptions of the Value of a Management Information System," Academy of Management Journal (17:1), 1974, pp. 46-55.
Galletta, D.F. "A Longitudinal View of an Office System Failure," SIGOA Bulletin (7:1), 1986, pp. 7-11.
Galletta, D.F. and Lederer, A.L. "Some Cautions on the Measurement of User Information Satisfaction," Graduate School of Business, The University of Pittsburgh, Working Paper WP-643, November 1986.
Goodwin, N.C. "Functionality and Usability," Communications of the ACM (30:3), March 1987, pp. 229-233.
Henderson, J.C. and Treacy, M.E. "Managing End-User Computing for Competitive Advantage," Sloan Management Review, Winter 1986, pp. 3-14.
Ives, B. and Olson, M. "User Involvement and MIS Success: A Review of Research," Management Science (30:5), 1984, pp. 586-603.
Ives, B., Olson, M., and Baroudi, J. "The Measurement of User Information Satisfaction," Communications of the ACM (26:10), October 1983, pp. 785-793.
Kerlinger, F.N. Foundations of Behavioral Research, McGraw-Hill, New York, 1978.
Lefkovits, H.C. "A Status Report on the Activities of the Codasyl End-User Facilities Committee (EUFC)," Information and Management (2), 1979, pp. 137-163.
Mahmood, M.A. and Becker, J.D. "Effect of Organizational Maturity on End-User Satisfaction with Information Systems," Journal of Management Information Systems (2:3), Winter 1985-86, pp. 37-64.
Martin, J. Application Development Without Programmers, Prentice-Hall, Englewood Cliffs, NJ, 1982, pp. 102-106.
McLean, E.R. "End-Users as Application Developers," MIS Quarterly (3:4), December 1979, pp. 37-46.
Neumann, S. and Segev, E. "Evaluate Your Information System," Journal of Systems Management (31:3), March 1980, pp. 34-41.
Nolan, R. and Gibson, C.F. "Managing the Four Stages of EDP Growth," Harvard Business Review (52:1), January/February 1974, pp. 76-88.
Nolan, R. and Seward, H. "Measuring User Satisfaction to Evaluate Information Systems," in Managing the Data Resource Function, R.L. Nolan (ed.), West Publishing Co., Los Angeles, 1974.
Nunnally, J.C. Psychometric Theory, McGraw-Hill, New York, 1978, p. 245.
Raymond, L. "Organizational Characteristics and MIS Success in the Context of Small Business," MIS Quarterly (9:1), 1985, pp. 37-52.
Rockart, J.F. and Flannery, L.S. "The Management of End User Computing," Communications of the ACM (26:10), October 1983, pp. 776-784.
Schoenfeldt, L.F. "Psychometric Properties of Organizational Research Instruments," in Method and Analysis in Organizational Research, T.S. Bateman and G.R. Ferris (eds.), Reston Publishing Co., Reston, VA, 1984, pp. 68-80.
Sondheimer, N. and Relles, N. "Human Factors and User Assistance in Interactive Computing Systems: An Introduction," IEEE Transactions on Systems, Man, and Cybernetics (SMC-12:2), March-April 1982, pp. 102-107.
Sprague, R.H. "A Framework for the Development of Decision Support Systems," MIS Quarterly (4:4), 1980, pp. 1-26.
Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science (21:2), October 1974, pp. 178-188.
Treacy, M.E. "An Empirical Examination of a Causal Model of User Information Satisfaction," Center for Information Systems Research, Sloan School of Management, Massachusetts Institute of Technology, April 1985.
Yavelberg, I.S. "Human Performance Engineering Considerations for Very Large Computer-Based Systems: The End User," The Bell System Technical Journal (61:5), May-June 1982, pp. 765-797.
Weiss, D.J. "Factor Analysis in Counseling Research," Journal of Counseling Psychology (17), 1970, pp. 477-485.

About the Authors

William J. Doll is a professor of MIS and strategic management at The University of Toledo and serves as a management consultant for area companies. The author of many articles in academic and professional journals, including the Academy of Management Journal, Communications of the ACM, MIS Quarterly, Information & Management, and the Journal of Systems Management, Dr. Doll has a doctoral degree in business administration from Kent State University and has worked as a senior management systems analyst on the corporate staff of Burroughs Corporation.

G. Torkzadeh is an assistant professor of information systems in the Operations Management Department at The University of Toledo. He holds a Ph.D. in operations research from The University of Lancaster, England, and is a member of the O.R. Society of Great Britain, TIMS, DSI, ACM, and SIM. He has been involved in research programs pertaining to the application of O.R. in the public sector, distribution resource allocation/re-allocation, and mathematical modelling, and has published in the Journal of the Operational Research Society, Communications of the ACM, and Information & Management. One of his current research interests is the management of the information systems function.



Measures of End-User Computing Satisfaction
Forty Items Used in Pilot Study

1. Is the system flexible?
2. Does the system provide out-of-date information?
3. Is it easy to correct the errors?
4. Do you enjoy using the system?
5. Do you think the output is presented in a useful format?
6. Is the system difficult to operate?
7. Are you satisfied with the accuracy of the system?
8. Is the information clear?
9. Are you happy with the layout of the output?
10. Is the system accurate?
11. Does the system provide sufficient information?
12. Does the system provide up-to-date information?
13. Do you trust the information provided by the system?
14. Do you get the information you need in time?
15. Do you find the output relevant?
16. Do you feel the output is reliable?
17. Does the system provide too much information?
18. Do you find the information up-to-date?
19. Does the system provide reports that seem to be just about exactly what you need?
20. Is the system successful?*
21. Is the system easy to use?
22. Is the system user friendly?
23. Are the reports complete?
24. Does the system provide the precise information you need?
25. Is the system efficient?
26. Is the output easy to understand?
27. Is the system troublesome?
28. Is the system convenient?
29. Is the system difficult to interact with?
30. Does the system provide comprehensive information?
31. Do you think the system is reliable?
32. Would you like more concise output?
33. Does the information content meet your needs?
34. Does the information you receive require correction?
35. Do you find the system dependable?
36. Would you like the system to be modified or redesigned?
37. Do you think the reports you receive are somewhat out-of-date?
38. Are you satisfied with the system?*
39. Would you like the format modified?
40. Do you get information fast enough?

* Criterion question.

