
Computers in Human Behavior 23 (2007) 1582–1596

Measuring ERP success: The key-users' viewpoint of the ERP to produce a viable IS in the organization
Jen-Her Wu, Yu-Min Wang

Department of Information Management, National Sun Yat-sen University, 70 Lien-Hai Road, Hsi-Tze Wan, Kaohsiung 80424, Taiwan
Department of Information Management, National Chi-Nan University, 1 University Road, Puli, Nantou Hsien, Taiwan

Available online 1 September 2005

Abstract

Enterprise resource planning (ERP) systems are becoming mature technologies that support inter- and intra-company business processes, even in small and medium enterprises. However, ERP systems are complex and expensive, and the decision to install one necessitates a choice of mechanisms for determining whether an ERP is needed and, once implemented, whether it is successful. User satisfaction is one evaluation mechanism for determining system success, and this study examined key-user satisfaction as a means of determining it. Initial analyses of the ERP system characteristics important for this environment were explored, and several previously validated user satisfaction instruments were selected for examination, using rigorous and systematic interview techniques and iterative development methods. A questionnaire was developed and then tested to establish its reliability and validity. Finally, a relationship was shown to exist between key-user satisfaction and perceived system success.
© 2005 Elsevier Ltd. All rights reserved.



Corresponding author. Tel.: +886 7 525 2000x4722; fax: +886 7 525 4799. E-mail addresses: (J.-H. Wu), (Y.-M. Wang). Tel.: +886 49 2910960x4820.

0747-5632/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2005.07.005



1. Introduction

Enterprise resource planning (ERP) is now considered to be the price of entry for running a business (Kumar & van Hillegersberg, 2000). It integrates core corporate activities and diverse functions by incorporating best practices to facilitate rapid decision-making, cost reduction, and greater managerial control. However, ERP systems are complex and expensive, and the decision to install an ERP system necessitates a choice of mechanisms for determining whether the ERP is needed and, once implemented, whether it is successful. Although it may be more desirable to measure system success in terms of monetary costs and benefits, such measures are often not possible due to the difficulty of quantifying intangible system impacts and of isolating the effect of the information system from the numerous intervening environmental variables that may influence organizational performance (DeLone & McLean, 1992). Consequently, the user-satisfaction construct has often been used as a surrogate for information systems (IS) success or IS effectiveness (Doll, Raghunathan, Lim, & Gupta, 1995; Harrison & Rainer, 1996; Huang, Yang, Jin, & Chiu, 2004; Ives, Olson, & Baroudi, 1983). It is believed that satisfied users will be more productive, especially where usage is mandatory (Calisir & Calisir, 2004).

In an ERP implementation process, there are two main types of user: key-users and end-users. Key users are selected from operating departments, are generally familiar with business processes, and have domain knowledge of their areas. They will be the developers of the requirements for the ultimate system. In addition, key users will specialize in parts of the ERP system and act as trainers, help-desk resources, educators, advisors, and change agents for end users. In contrast, end users are the ultimate users of the ERP system; they have only very specific knowledge of the parts of the system they need for their work (Hirt & Swanson, 1999).
Therefore, the key-users' role is essential to the ultimate system's success; the measurement of key-user satisfaction with the entire ERP experience, coupled with the perception of success of the final IS product, is the essential theme of this article. This study was initiated to show how to measure key-user satisfaction and to offer a proof-by-analysis that ERP key-user satisfaction is closely related to perceived system success.

2. Background

2.1. User satisfaction measurement

User satisfaction (US) is the sum of one's feelings and attitudes toward a variety of factors related to the delivery of information products and services (Ives et al., 1983). It is an overall measure of how satisfied a user is with his or her information system. Bailey and Pearson (1983) first developed a valid and useful US measure with 39 items; the instrument provides a broad and complete base of satisfaction-related themes. Building upon the work of Bailey and Pearson, Ives et al. (1983) established



a 13-item "short-form" instrument, comprising three factor measures: information product, EDP (MIS) staff and service, and user knowledge and involvement. Baroudi and Orlikowski (1988) confirmed the three-factor structure and supported the diagnostic utility of the short-form instrument. Igbaria and Nachman (1990) and Doll et al. (1995) reexamined the instrument of Ives et al. and provided empirical evidence supporting the 13-item instrument as a measure of user satisfaction. The short-form instrument has been useful in measuring user satisfaction in a traditional IS environment (Galletta & Lederer, 1983) or in the context of large IS-developed transaction processing information systems, where user involvement is thought to play an important role (Baroudi & Orlikowski, 1988). Sengupta and Zviran (1997) later reconfirmed the usefulness of the short form and added a new factor, contractor services, to it.

2.2. The roles of key-users and ERP implementation context

The overall life cycle of adoption and use of ERP systems within the ultimate user organization may be the responsibility of a special group, usually termed key-users, who are selected from operating departments and must be intimately familiar with business processes and have domain knowledge of their areas. A somewhat simplified history of the process involving key-users in the ERP life cycle within the firm may occur as follows:

1. The management of the organization has heard or determined that an ERP could be purchased to help in reducing software implementation cost and time. The key-user group is formed and tasked with assessing the potential of using an ERP, which entails making a preliminary assessment of the requirements that can be satisfied by the use of the ERP and showing the expected value of the purchase.
2. Once the ERP proposal is internally accepted, the key-users, as a group, must help to select the appropriate vendor and act with them and any implementation contractor in completing the requirements definition and implementation phases.
3. Because organizations seldom develop their own ERP systems in-house, the next stage involves the selection of the contractors.
4. This is followed by the implementation phase, where the contractors work under the direction of the key-user project team. In an ERP outsourcing environment, because the systems are configurable IS packages, customization generally involves intense cooperation between key-users and contractors.
5. Finally, once the ERP system has been implemented, the key-users train their end-users.

In an ERP outsourcing environment, two stakeholders generally participate in the implementation process: an internal project team to define the needs and an external contractor to provide a system to satisfy the requirements, as depicted in Fig. 1. Typically, a project team will consist of top management, MIS staff, and



Fig. 1. The ERP implementation context.

key users. During the implementation process, key users communicate with the contractors and learn system functionality and uses. Once the ERP system has been implemented, the key users then train end users. Key users and end users both interact directly with the ERP system. The role of the MIS staff changes from that of system developer to that of supporting participant during ERP system implementation. The external contractors may need to employ consultants, vendors, and third parties. The consultants communicate with key users to establish the acquiring organization's standard operating procedures (SOPs) and identify differences between the organization's business requirements and the functionality provided by the ERP system. The vendors and third parties may provide solution, design, or customization support according to SOP specifications, install the ERP system, and provide training to key users.

Based on the above analysis, an ERP implementation context is different in nature from that of almost any other information system (IS). This being so, any instrument developed for measuring user satisfaction with other IS may not apply to measuring satisfaction in an ERP environment, which may need to consider other factors and characteristics, such as the quality of the project team, the ERP system product, and user–contractor interaction. However, the factors of ERP product, contractor service, and user knowledge and involvement from the previous work can be a good starting point when considering suitable constructs for measuring ERP user satisfaction.



3. Research methods

A two-phased approach was used in constructing the measure:

Phase 1. Develop an initial measure list by taking results from the literature review and examining ERP characteristics, then determine whether it is complete and clear by using it in five interviews with ERP key-users and consultants.

Phase 2. Revise the list accordingly and use it in a pilot test with 30 key-users of ERP systems.

As a result, the initial survey instrument was extensively revised. The new instrument was then tested via a survey of key-users selected from the top 1000 enterprises in Taiwan. Finally, an empirical test of the validity and reliability of the instrument was conducted.

3.1. Measures

To ensure that a comprehensive list of scales was included, the works of previous researchers were reviewed. Prior studies (Bailey & Pearson, 1983; Baroudi & Orlikowski, 1988; Ives et al., 1983; Sengupta & Zviran, 1997) indicated that user satisfaction measurement should include four factors: MIS staff and services, information product, contractor service, and user's knowledge and involvement. However, organizations seldom develop their own ERP systems in-house and are spending billions of dollars on implementing vendor-developed software packages. Typically, ERP implementation in organizations is directed by an ERP project team, external consultants, and suppliers; "MIS staff and service" is therefore inappropriate to our measure and was revised as "contractor services". The factor includes three items: "professional capabilities of consultants/suppliers", "technical competence of consultants/suppliers", and "training". Items were added to or modified in the list if they were relevant to ERP characteristics or the target environment, e.g., system stability, auditing and control, output requirement, and feeling of user involvement. Finally, two global items, perceived overall satisfaction and success, based on Doll and Torkzadeh (1988), were included for verifying the construct and criterion-related validity of the instrument. An initial list of items is shown in Table 1.

The resultant list of 21 items for measuring key-user satisfaction was used in a series of personal interviews in order to refine the instrument. These interviews allowed us to gauge the clarity of the questionnaire, to assess whether the instrument was capturing the phenomenon desired, and to verify that no important aspects of the items were omitted. The process continued until no further modifications to the questionnaire occurred; this took five iterations involving one consultant and four firms.

Feedback served as a basis for correcting, refining, and enhancing the experimental items. An item was eliminated if we found that it represented the same aspect as another with only slightly different wording, or it was modified if its semantics

Table 1
An initial measure list of key-user satisfaction

1. Professional capabilities of consultants/suppliers
2. Technical competence of consultants/suppliers
3. Domain knowledge of contractors
4. Training
5. Documentation
6. Required time for ERP implementation
7. Accuracy
8. Timeliness
9. Reliability
10. Response time
11. Completeness
12. Output requirement
13. Relevancy
14. System stability
15. Auditing and control
16. Ease of use
17. Usefulness
18. Feeling of user involvement
19. System understanding
20. System flexibility
21. System integrity
22. Overall satisfaction (a)
23. Perceived success level (a)

(a) Criterion item.

appeared to be ambiguous or irrelevant. For instance, respondents said that if the ERP system satisfied the output requirement (including accuracy or precision), then the concept could be better represented by whether the system allowed customization. Items were added to the list if they were deemed relevant to ERP operations or the target environment. For instance, the ability to present information that not only meets user needs but also satisfies them by allowing variety in their outputs was considered necessary; therefore, output flexibility was added. Also, the team remarked on the importance of top management involvement, domain knowledge of consultants/suppliers, and project management of consultants/suppliers, and these were also added. The resulting revised list is shown in Table 2.

Each item was presented to the respondents in the form shown in Table 3. Scaling of the seven intervals for each item was quantified by assigning the integer values from −3 to +3. Using these, an individual reaction to a given item was taken as the average of the two values assigned to its sub-item adjective pairs.

3.2. The pilot study

The purpose of the pilot test was to confirm the completeness and importance of each item in the instrument and to eliminate logically duplicative ones. The 30 key-users were asked to fill out questionnaires. They were also asked to assess the importance



Table 2
Refined key-user satisfaction list

1. Top management involvement
2. Domain knowledge of consultants/suppliers
3. Related experience of consultants/suppliers
4. Project management of consultants/suppliers
5. Technical competence of consultants/suppliers
6. Customization
7. Documentation
8. Required time for ERP implementation
9. Training
10. Accuracy
11. Timeliness
12. Reliability
13. Response time
14. Completeness
15. Output flexibility
16. Relevancy
17. System stability
18. Auditing and control
19. Ease of use
20. Usefulness
21. Feeling of user involvement
22. System understanding
23. System flexibility
24. System integrity
25. Overall satisfaction (a)
26. Perceived success level (a)

(a) Criterion item.

Table 3
Presentation format of each item

Documentation: The user guide, operation guide, manual, and any formal document required for the ERP system is
Incomplete __:__:__:__:__:__:__ Complete
Hazy __:__:__:__:__:__:__ Clear
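The scoring scheme described in Section 3.1 can be sketched as follows. This is a hypothetical illustration (the function name is mine, not the authors'): each of the seven intervals in an adjective pair maps to an integer from −3 to +3, and an item's score is the average of the values marked on its two adjective pairs.

```python
def item_score(pair_a: int, pair_b: int) -> float:
    """Average of the two adjective-pair ratings, each in -3..+3."""
    for v in (pair_a, pair_b):
        if not -3 <= v <= 3:
            raise ValueError("each rating must lie in -3..+3")
    return (pair_a + pair_b) / 2

# A respondent marking Documentation as fairly complete (+2) and very clear (+3):
print(item_score(2, 3))  # 2.5
```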

of each item. An overall importance rating was obtained by calculating the mean of the scores for all 30 respondents; scales were given scores of 1 (not important) to 7 (very important). The results indicated that each item scored 4 or higher in over 80% of the responses, suggesting that no wording revisions or new items were needed and establishing instrument completeness. Inter-item correlation results showed that there were no "double-barreled" conflicts among adjective pairs. Correlations among all pairs of items within the 24-item instrument were examined. If a pair's correlation coefficient was higher than 0.7 and significant at p < 0.001, indicating duplicated meaning, the item of the pair with the lower item-to-sum correlation was considered for



elimination to improve instrument readability and parsimony. Application of this criterion showed that seven item pairs, (7, 9), (11, 14), (15, 16), (10, 19), (20, 10), (20, 11) and (23, 24), had similar and highly related meanings and significant correlations. Therefore, items 7, 11, 15, 19, 20, and 23 were eliminated. The remaining 18 items had a reliability of 0.93 and a correlation of 0.81 with the overall-satisfaction criterion item.
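The duplicate-elimination rule above can be sketched as follows. This is a minimal illustration (the function name and synthetic data are mine; the significance test at p < 0.001 used in the paper is omitted): for every item pair whose correlation exceeds 0.7, the member with the lower item-to-sum correlation is flagged for removal.

```python
import numpy as np

def duplicate_items(scores, r_cut=0.7):
    """scores: (respondents x items) matrix. Flag, per over-correlated pair,
    the item with the lower corrected item-to-sum correlation."""
    scores = np.asarray(scores, dtype=float)
    r = np.corrcoef(scores, rowvar=False)
    total = scores.sum(axis=1)
    item_to_sum = np.array([
        np.corrcoef(scores[:, i], total - scores[:, i])[0, 1]
        for i in range(scores.shape[1])
    ])
    drop = set()
    n = scores.shape[1]
    for i in range(n):
        for j in range(i + 1, n):
            if r[i, j] > r_cut:
                drop.add(i if item_to_sum[i] < item_to_sum[j] else j)
    return sorted(drop)

# Synthetic check: item 3 is a near-copy of item 0, so one of that pair is flagged.
rng = np.random.default_rng(0)
base = rng.normal(size=(60, 3))
scores = np.column_stack([base, base[:, 0] + rng.normal(scale=0.05, size=60)])
print(duplicate_items(scores))
```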

4. Survey methods and sample

Data were collected using a questionnaire survey administered in Taiwan. The top-1000 firms were included, but only those with ERP systems implemented by a vendor were selected. This provided a sample of 617 Taiwan firms; of these, 587 received initial phone calls explaining the purpose of our project and inquiring whether the firm would be willing to participate in the study. A contact person was identified at each company; this person was asked to distribute the self-administered questionnaires to company key-users. We sent out 730 questionnaires and 215 completed questionnaires were returned. Ten responses were considered incomplete and had to be discarded. This left 205 valid responses, a response rate of 28%. The respondents represented a broad cross-section of management levels and ERP modules, as shown in Table 4.
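The response-rate arithmetic reported above can be checked directly:

```python
# Verifying the reported survey counts: 730 sent, 215 returned, 10 discarded.
sent, returned, incomplete = 730, 215, 10
valid = returned - incomplete
print(valid, f"{valid / sent:.0%}")  # 205 28%
```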

5. Exploratory analysis

5.1. Item analysis and reliability estimates

Prior to conducting formal factor analysis, internal consistency (α-coefficient) had to be examined to ensure measures were unidimensional and to eliminate "garbage
Table 4
Profile of study respondents

Respondents by position:
Manager/supervisor 21.8%
Professional employees without supervisory responsibility 53.0%
General employee 25.2%

Respondents by ERP functional module:
Financial and accounting 16.7%
Sales and orders 14.7%
Production and manufacturing 14.6%
Delivering 5.9%
Purchasing 15.3%
Supply chain management 9.8%
Personnel management 5.1%
Warehousing and inventory 15.9%
Others 2.0%



items" (Churchill, 1979; Sethi & King, 1991). The results showed that the 18-item ERP key-user satisfaction instrument had a reliability of 0.916, indicating that the test scores would correlate well with true scores. Correlations of each item with the sum of scores on all items were plotted in descending order. Items were eliminated if their item-to-sum correlations were less than 0.4, or when the correlations produced substantial or sudden drops in the plotted pattern (Churchill, 1979). On this basis, the "top management involvement" item was eliminated. The remaining 17 items retained a reliability of 0.920.

5.2. Exploratory factor analysis

Exploratory factor analysis (EFA) was used to validate the various dimensions underlying the data set. The 205 responses were examined using principal-components factor analysis as the extraction technique and varimax as the orthogonal rotation method. In order to derive a stable factor structure, three commonly employed decision rules were applied to eliminate items: (1) eigenvalue less than 1; (2) loadings of less than 0.35 on all factors; and (3) loadings greater than 0.35 on two or more factors (Sethi & King, 1991). Factor analysis, result evaluation, and item elimination were repeated. Five iterations yielded a stable set of three factors, eliminated three items, and left a total of 14 items (see Appendix A for the items included). Factor analysis was run once more to determine whether the factor structure remained stable. Table 5 shows the stable factor matrix. These factors were labeled "ERP Product", "Contractor Service", and "Knowledge & Involvement", and explained 66.8% of the variance in the data set. Fig. 2 presents a model for measuring key-user satisfaction in an ERP environment.
Table 5
Stable results of factor analysis

Item                              ERP product   Contractor service   Knowledge and involvement
C1. Domain knowledge              0.15          0.82                 0.13
C2. Related experience            0.17          0.88                 0.12
C3. Project management            0.30          0.84                 0.08
C4. Technical competence          0.22          0.85                 0.02
C5. Training                      0.11          0.69                 0.24
P1. Accuracy                      0.73          0.20                 0.30
P2. Reliability                   0.80          0.14                 0.19
P3. Response time                 0.70          0.06                 0.19
P4. Completeness                  0.76          0.28                 −0.09
P5. System stability              0.77          0.19                 −0.01
P6. Auditing and control          0.73          0.15                 0.05
P7. System integrity              0.64          0.17                 0.34
K1. Feeling of user involvement   0.26          0.06                 0.83
K2. System understanding          0.13          0.34                 0.70
Eigenvalue                        6.11          2.18                 1.07
Proportion (%)                    43.65         15.55                7.62
Cumulative (%)                    43.65         59.20                66.82
α coefficient                     0.88          0.90                 0.61
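The reliability estimate and the item-elimination rules of Sections 5.1 and 5.2 can be sketched as follows. This is an illustrative sketch (function names and example data are mine): Cronbach's alpha from an item-score matrix, and the two loading-based elimination rules applied to a rotated loading matrix.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))

def efa_items_to_drop(loadings, cut=0.35):
    """Decision rules (2) and (3) from Section 5.2: drop items that load below
    `cut` on every factor, or at/above `cut` on two or more factors."""
    high = (np.abs(np.asarray(loadings)) >= cut).sum(axis=1)
    return [i for i, h in enumerate(high) if h == 0 or h >= 2]

# Three perfectly parallel items give alpha = 1.0.
x = np.arange(10, dtype=float)
print(round(cronbach_alpha(np.column_stack([x, x, x])), 6))  # 1.0

# Item 1 loads on no factor, item 2 cross-loads on both: both are dropped.
print(efa_items_to_drop([[0.80, 0.10], [0.20, 0.10], [0.50, 0.50]]))  # [1, 2]
```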

[Fig. 2: the second-order model, with items P1–P7 loading on ERP Product, C1–C5 on Contractor Service, and K1–K2 on Knowledge and Involvement, all under key-user satisfaction in an ERP environment.]
Fig. 2. A model for measuring key-user satisfaction in an ERP environment.

5.3. Reliability and validity

Cronbach's α-coefficient was used to measure reliability, which is deemed acceptable when the α-coefficient exceeds the 0.8 level. The 14-item instrument had a reliability of 0.90.

Content validity and criterion-related validity were used to assess validity. Content validity refers to the representativeness and comprehensiveness of the items used to create an instrument; evaluating it involves judging each item for its presumed relevance. In our study, the determination of user satisfaction was initially proposed based on a review of studies in the IS discipline. To ensure ERP user-satisfaction instrument completeness, items were taken from several sources, among them major prior studies (Bailey & Pearson, 1983; Baroudi & Orlikowski, 1988; Hirt & Swanson, 1999; Ives et al., 1983; Kumar & van Hillegersberg, 2000; Sengupta & Zviran, 1997), an examination of ERP system characteristics, and personal interviews conducted by the researchers. Thirty key-users from five companies were also asked to examine the importance and relevance to user satisfaction of each item in the instrument. This rigorous approach lends credence to our claim of content validity.



Criterion-related validity is determined by comparing item scores with at least one criterion known or believed to measure the construct under study. Criterion-related validity in this study was determined by the correlations between the sum of scores on all items in the instrument and the measures of the validation criteria (perceived overall satisfaction and perceived system success). The correlations are 0.77 and 0.68, respectively, significant at p < 0.001, ensuring acceptable criterion-related validity.
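The criterion-related validity check above is a Pearson correlation between instrument totals and a criterion item, which can be sketched as follows (the function name and synthetic data are mine):

```python
import numpy as np

def criterion_validity(item_scores, criterion):
    """Pearson correlation between instrument total scores and a criterion item."""
    total = np.asarray(item_scores, dtype=float).sum(axis=1)
    return np.corrcoef(total, criterion)[0, 1]

# Synthetic check: when every item tracks the criterion exactly, r = 1.
c = np.arange(8, dtype=float)
print(round(criterion_validity(np.column_stack([c, c, c]), c), 6))  # 1.0
```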

6. Confirmatory analysis

Fig. 2 shows that this research proposes a second-order factor model of ERP key-user satisfaction that consists of three first-order factors measured by 14 items. This model hypothesized that the three first-order factors were correlated and that the correlations are explained by the second-order factor. The LISREL 8.30 program was then used to test the model fit.

6.1. Model-data fit

Because no one statistic is universally accepted as an index of model adequacy, several measures were used. The ratio of χ² to the degrees of freedom (i.e., χ²/df), goodness-of-fit index (GFI), root mean square residual (RMSR), adjusted goodness-of-fit index (AGFI), normed fit index (NFI), non-normed fit index (NNFI), comparative fit index (CFI), and incremental fit index (IFI) were used to evaluate the model fit. The following criteria of the goodness-of-fit indices were used to assess model fit: χ²/df < 5, GFI > 0.85, RMSR < 0.05, AGFI > 0.85, NFI > 0.80, NNFI > 0.80, and CFI approaching 1 (Hadjistavropoulos, Frombach, & Asmundson, 1999; Hair, Anderson, Tatham, & Black, 1995). The value of CFI varies between 0 and 1.0, with higher values indicating better fit. As shown in Table 6, the hypothesized model exhibits good levels of fit and thus provides a satisfactory representation of the underlying structure of the measure for ERP key-user satisfaction.

6.2. Convergent validity and discriminant validity

Convergent validity refers to the proximity of the results of different approaches to the same problem. It is assessed here using the Bentler-Bonett normed fit index (NFI); a scale with an NFI of 0.90 or above shows strong convergent validity (Mak & Sockel, 2001). The measurement model, with NFI values above 0.90, has strong
Table 6
Goodness-of-fit indices for research model

Fit index   χ²/df   GFI     RMSR    AGFI    NFI     NNFI    CFI
Outcome     1.87    0.91    0.05    0.87    0.91    0.94    0.95
Criteria    <5      >0.85   <0.05   >0.80   >0.80   >0.80   Approaching 1

Table 7
The square roots of AVEs and correlations among latent constructs (a)

Factor                       ERP product   Contractor service   Knowledge and involvement
ERP product                  0.72
Contractor service           0.45          0.81
Knowledge and involvement    0.45          0.43                 0.67

(a) The square roots of AVE on the diagonal; correlation coefficients off the diagonal.

convergent validity. Discriminant validity refers to whether the items in one scale are distinguishable from the construct of another scale. Fornell and Larcker (1981) suggest that the square root of the average variance extracted (AVE) for a construct should be substantially higher than the correlation coefficients between that construct and all other constructs. Table 7 provides evidence of discriminant validity. The highest correlation coefficient, 0.45, was observed between "ERP Product" and "Knowledge and Involvement". This was significantly lower than their square roots of AVEs, which were 0.72 and 0.67, respectively.
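The Fornell-Larcker comparison can be sketched directly from Table 7's values (the function name is mine; the Knowledge and Involvement square root of AVE, 0.67, is the value reported in the text):

```python
def fornell_larcker_ok(sqrt_ave, corr):
    """True when each construct's sqrt(AVE) exceeds its correlation with
    every other construct."""
    n = len(sqrt_ave)
    return all(sqrt_ave[i] > corr[i][j]
               for i in range(n) for j in range(n) if i != j)

sqrt_ave = [0.72, 0.81, 0.67]   # ERP Product, Contractor Service, Knowledge & Involvement
corr = [[1.00, 0.45, 0.45],
        [0.45, 1.00, 0.43],
        [0.45, 0.43, 1.00]]
print(fornell_larcker_ok(sqrt_ave, corr))  # True
```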

7. Conclusions and implications

This article represents significant progress toward the development of a measure of key-user satisfaction in ERP environments. Building upon several previously validated instruments, the instrument merges the factors of ERP product, knowledge and involvement, and contractor service, and appears to have adequate reliability and validity. The measurement tool developed here can contribute to a better understanding of the specific aspects of ERP systems in organizations. This is becoming increasingly important as more and more organizations implement ERP systems, since empirical research in this environment is presently lacking.

In sum, the contribution of this research is fourfold. First, this research offers a proof-by-analysis that ERP key-user satisfaction is closely related to perceived system success; practitioners and researchers can employ user satisfaction as a measure of system success in an ERP environment. Second, this research identifies that key-user satisfaction with an ERP system is a multidimensional construct (i.e., ERP product, contractor service, and knowledge and involvement). The three factors are interwoven, and one must not focus exclusively on any single factor in assessing overall ERP success. The results enhance our understanding of the nature and dimensionality of key-user satisfaction with ERP systems. Third, this research provides some implications for implementing and managing ERP systems: ERP vendors, consultants, and IS managers should pay attention not only to improving the quality of ERP products, but also to improving users' knowledge and involvement and to selecting suitable consultants and suppliers. Finally, this instrument can be utilized as a diagnostic tool. Practitioners can use



the instrument to assess and analyze which aspects of their ERP systems are most problematic. Practitioners can compare the satisfaction levels of their ERP systems with the expected levels to understand their relative strengths and weaknesses and to take the corrective actions necessary to improve them.

Future research, with different samples and longitudinal studies, is necessary. The validity of a measure cannot be truly established on the basis of a single study; validation of a measure requires the assessment of measurement properties over a variety of samples in similar and different contexts. In addition, future research is needed to develop standard norms for evaluating specific ERP modules, to investigate the relative importance of the determinants of ERP key-user satisfaction, and to understand the relationship between ERP implementation stage and user satisfaction.

Acknowledgements

This research was supported by the National Science Council of Taiwan under Grants NSC 93-2416-H-110-015 and NSC 94-2416-H-025-002.

Appendix A. Measures of key-user satisfaction in an ERP environment

C1. Domain knowledge of consultants/suppliers: The domain knowledge and expertise exhibited by the ERP consultants.
C2. Related experience of consultants/suppliers: The project experience and consulting expertise exhibited by the ERP consultants.
C3. Project management of consultants/suppliers: The ability of the ERP consultants in ERP project management.
C4. Technical competence of consultants/suppliers: The information technology skills and enterprise system expertise exhibited by the ERP consultants.
C5. Training: The amount and quality of specialized instruction and practice that is afforded to the user to increase the user's proficiency in ERP usage.
P1. Accuracy: The correctness of the output information provided by the ERP system.
P2. Reliability: The consistency and dependability of the output information provided by the ERP system.
P3. Response time: The elapsed time between a user-initiated request for ERP service and a reply from the ERP system.
P4. Completeness: The comprehensiveness of the output information content provided by the ERP system.
P5. System stability: The robustness of the ERP system.
P6. Auditing and control: The type and quality of auditing and inspection rendered by the ERP system.
P7. System integrity: The capacity of the ERP system to communicate data with other systems servicing different functional areas, located in different geographical zones, or working for other business partners.



K1. Feeling of user involvement: The degree to which the user perceives that using the ERP system is important and self-relevant.
K2. System understanding: The degree of comprehension that the user possesses about the ERP system and the functions that are provided.

References

Bailey, J., & Pearson, S. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29(5), 530–545.
Baroudi, J. J., & Orlikowski, W. (1988). A short-form measure of user information satisfaction. Journal of Management Information Systems, 4(4), 45–59.
Calisir, F., & Calisir, F. (2004). The relation of interface usability characteristics, perceived usefulness, and perceived ease of use to end-user satisfaction with enterprise resource planning (ERP) systems. Computers in Human Behavior, 20(4), 505–515.
Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 64–73.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: the quest for the dependent variable. Information Systems Research, 3(1), 60–95.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly, 12(2), 259–271.
Doll, W. J., Raghunathan, T. S., Lim, J. S., & Gupta, Y. P. (1995). A confirmatory factor analysis of the user information satisfaction instrument. Information Systems Research, 6(2), 177–188.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50.
Galletta, D. F., & Lederer, A. L. (1983). Some cautions on the measurement of user information satisfaction. Decision Sciences, 20(3), 419–438.
Hadjistavropoulos, H. D., Frombach, I. K., & Asmundson, G. J. G. (1999). Exploratory and confirmatory factor analytic investigations of the illness attitudes scale in a nonclinical sample. Behaviour Research and Therapy, 37, 671–684.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. (1995). Multivariate data analysis with readings (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Harrison, A. W., & Rainer, R. K. (1996). A general measure of user computing satisfaction. Computers in Human Behavior, 12(1), 79–92.
Hirt, S. G., & Swanson, E. B. (1999). Maintaining ERP: Rethinking relational foundations. The Anderson School at UCLA, working paper.
Huang, J.-H., Yang, C., Jin, B.-H., & Chiu, H. (2004). Measuring satisfaction with business-to-employee systems. Computers in Human Behavior, 20(1), 17–35.
Igbaria, M., & Nachman, S. A. (1990). Correlations of user satisfaction with end user computing: an exploratory study. Information & Management, 19, 73–82.
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785–793.
Kumar, K., & van Hillegersberg, J. (2000). ERP experiences and evolution. Communications of the ACM, 43(4), 23–26.
Mak, B. L., & Sockel, H. (2001). A confirmatory factor analysis of IS employee motivation and retention. Information & Management, 38, 265–276.
Sengupta, K., & Zviran, M. (1997). Measuring user satisfaction in an outsourcing environment. IEEE Transactions on Engineering Management, 44(4), 414–421.
Sethi, V., & King, W. R. (1991). Construct measurement in information systems research: an illustration in strategic systems. Decision Sciences, 22, 455–472.


Jen-Her Wu is Professor of Information Management and Director of the Institute of Health Care Management at National Sun Yat-sen University. He has published two books (Systems Analysis and Design; Object-Oriented Analysis and Design) and more than 40 papers in professional journals such as Computers in Human Behavior, Information & Management, Decision Support Systems, International Journal of Human-Computer Interaction, International Journal of Technology Management, Expert Systems, Knowledge Acquisition, Journal of Computer Information Systems, and others. His current research interests include various aspects of information systems development and management, human-computer interaction, and knowledge management.

Yu-Min Wang is an assistant professor of Information Management at National Chi Nan University. He has over six years of IT experience in various organizations and positions. His research interests include information technology adoption and implementation, human-computer interaction, ERP, and knowledge management. His work has been published in several journals and conference proceedings.
