Information technology evaluation: issues and challenges
Govindan Marthandan and Chun Meng Tang
Faculty of Management, Multimedia University, Cyberjaya, Malaysia
Purpose – To justify an increase in information technology (IT) spending and to understand the utilization of limited organizational resources on IT, the correlation between IT and business performance has been of great interest to business managers. However, business managers face issues and challenges in finding out how and to what extent IT is able to deliver the intended benefits. The purpose of this paper is to examine IT evaluation issues and challenges faced by information systems (IS) researchers, IS specialists, and business managers.
Design/methodology/approach – This paper begins by reviewing the disparate discussions in past literature on IT evaluation issues and challenges. It then provides a synthesis of the disparate discussions by identifying eight issues and challenges in IT evaluation.
Findings – The eight issues and challenges identified are: evaluation scope, evaluation timing, unit of analysis, level of analysis, different perspectives, different dimensions, different measures, and underpinning theoretical frameworks. The paper concludes with some suggestions on ways to improve IT evaluation practices.
Originality/value – This paper posits that before a pragmatic IT evaluation approach can be developed, it is necessary to first understand the issues and challenges faced by IS researchers, IS specialists, and business managers in IT evaluation. Having identified the eight issues and challenges, this paper provides pointers on what needs to be considered when conducting IT evaluation.
Keywords Communication technologies, Decision making, Value analysis
Paper type General review
1. Introduction
As information technology (IT) advances dramatically with new features and capabilities, moving away from the data processing era to a strategic information systems (IS) era, efficiency measures are no longer sufficient to illustrate the true business value of IT. Over the years, the expectations of business managers on IT value have gradually shifted from operational-centered to strategic-focused. How fast a processor runs or how many pages a printer prints might not interest business managers. Instead, business managers are interested in the strategic advantages that IT brings. As highlighted by Seddon et al. (2002), IT evaluation is slowly shifting from the technical or financial perspective to a business-oriented one. Thus, it is many times more difficult to evaluate IT now than in the past, as we are looking at not only tangible but also intangible benefits. Traditionally, business managers justify their IT spending with the help of financial techniques, e.g. payback period, net present value, and internal rate of return, that are common for the evaluation of capital investments. However, the effectiveness of financial techniques in evaluating IT investments is debatable, as IT investments are different from other capital investments – especially when we cannot estimate hidden, intangible, and non-financial benefits (Ballantine and Stray, 1998; Irani and Love, 2000/2001). For example, the return on investment (ROI) calculation can be negative for a threshold IT investment that is critical for the survival of an organization.
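To make the financial techniques just mentioned concrete, the sketch below computes payback period, net present value (NPV), and internal rate of return (IRR) for a hypothetical IT investment. This is an illustrative example only, not from the paper; all cash-flow figures are invented, and, as the text notes, such formulas capture only tangible cash flows and say nothing about intangible or threshold benefits.

```python
# Illustrative sketch (not from the paper): the traditional financial
# techniques discussed above, applied to an invented IT investment cash flow.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid          # NPV still positive: the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    """Years until cumulative cash flow turns non-negative (None if never)."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

flows = [-100_000, 30_000, 35_000, 40_000, 45_000]  # hypothetical project
print(round(npv(0.10, flows), 2))   # NPV at a 10% discount rate
print(round(irr(flows), 4))         # internal rate of return
print(payback_period(flows))        # simple (undiscounted) payback, in years
```

The bisection search for the IRR assumes the NPV curve crosses zero exactly once, which holds for a conventional outlay-then-inflows profile like this one.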
The authors would like to thank the anonymous reviewers and the Editor for the valuable comments which have helped us to improve the quality of the paper.
Journal of Systems and Information Technology Vol. 12 No. 1, 2010 pp. 37-55 # Emerald Group Publishing Limited 1328-7265 DOI 10.1108/13287261011032643
Strategic IT investments cannot be evaluated using tangible criteria alone. In strategic IT investment evaluation, there can be other intangible benefits (Weill and Olson, 1989), and identifying qualitative, intangible, or hidden value poses a major challenge to business managers. Without considering intangible benefits, the use of traditional evaluation techniques can lead to wrong investment decisions (Anandarajan and Wen, 1999). Irani et al. (2002) explain that as organizations are searching for long-term strategic benefits rather than short-term operational benefits, traditional evaluation techniques can be inappropriate. For example, Irani et al. (2006) reason that if the investment objective of an IT project is to reduce operational costs, then traditional evaluation techniques are sufficient. However, for a project that aims to exploit IT for strategic benefits, a newer approach like the balanced scorecard might be a better choice for evaluation purposes.

The different objectives of IT investments further complicate IT evaluation. Weill (1992) categorized IT investments in terms of management objectives, i.e. strategic, informational, and transactional. In a study of IT performance in 33 small-to-medium-sized valve manufacturing firms, he reported that transactional IT contributed significantly to better firm performance, but not in the case of informational IT. Strategic IT also did not prove its worth over a longer term. This could be because the strategic advantages of strategic IT would be eroded once competitors have emulated it. He concluded that different types of IT investment served different management objectives and thus exhibited different effects on firm performance. Moreover, a strategic IT implementation requires complementary organizational and business changes, e.g. business process redesign, which in turn makes it difficult to segregate IT value from the total value (Brynjolfsson and Hitt, 2000).

Although organizations are keen to determine the relationship between IT investment and organizational performance, the assessment of IT investment returns is difficult, and organizations do not have adequate evaluation frameworks for that purpose. Stressing the importance of IT evaluation, Berghout and Remenyi (2005), having reviewed some 298 articles presented at the European Conference on IT Evaluation, conclude that future research should try to build a theoretical groundwork of IT evaluation with strong core concepts and better evaluation methodologies. They also suggest that the reasons behind the lukewarm support among IS practitioners for IS evaluation should be explored further. Leem and Kim (2004) claim that past IS evaluation studies suffer from several limitations. First, although past studies have been working hard on performance measures and the IS-performance relationship, they have not been successful in identifying the best evaluation approach. Second, although a daunting challenge, IS evaluation should be conducted in all stages of the IS development life cycle, from initiation to post-implementation; past studies only assessed a particular stage. Third, past studies in general did not discuss the business implications of IS evaluation. Thus, there is a strong need to develop underpinning theoretical frameworks for better evaluation of IT investment.

The difficulties, issues, problems, and challenges faced by IS researchers, IS specialists, and business managers in IT evaluation have been reported in past studies (e.g. Anandarajan and Wen, 1999; Irani et al., 2002; Law and Ngai, 2005; Lubbe and Remenyi, 1997). However, the discussions are disparate, scattered, and in most cases lack depth. This paper is motivated to fill in this knowledge gap. It posits that before a pragmatic IT evaluation framework can be developed, it is necessary to first understand the issues and challenges faced by IS researchers, IS specialists, and business managers in IT evaluation. In order to meet this objective, this paper follows the pointers provided by Webster and Watson (2002) on writing a review paper.
Preliminary selection of articles was conducted by retrieving abstracts of relevant articles, as at May 2009, in three major online research databases, i.e. ProQuest, EBSCOhost, and ScienceDirect. As the literature frequently used the terms information systems and information technology interchangeably, both terms were included. To retrieve the maximum number of abstracts, the search was conducted using each of these terms together with each of the following keywords: success, effectiveness, performance, productivity, payoff, value, benefit, evaluation, and efficiency. No restriction was imposed on publication year. However, only full-text articles were considered, as they provided the detailed information necessary for further analysis. Some articles were cited in more than one of the three online research databases, and duplicate copies were deleted.

In the first round of review, the abstracts were read carefully to decide which articles were to be retained for the next stage of review. To be retained, the abstracts should indicate a context of IS evaluation. Some abstracts that did not provide enough details for a deletion decision were retained for the second round of review. In the second round of review, the full text of individual articles was downloaded and read. It was decided that only articles that had a specific section about IS evaluation difficulties, problems, issues, and challenges were to be retained. Articles that merely mentioned the reasoning behind the research design of an empirical study, e.g. reasons to focus on a particular stakeholder group or the use of quantitative or qualitative measures, were not included. Articles that quoted other articles heavily were also not included; instead, the original articles were referred to.

In the sections that follow, first a review of the selected articles is provided, and then each of the eight IT evaluation issues and challenges is discussed. The eight issues and challenges are: evaluation scope, evaluation timing, unit of analysis, level of analysis, different perspectives, different dimensions, different measures, and underpinning theoretical frameworks. This paper concludes with some suggestions on how to improve IT evaluation practices.

2. Issues and challenges
In ex-ante or ex-post IT evaluation, the different perspectives of stakeholders complicate IT evaluation by introducing a diverse range of dimensions and measures (Agourram, 2009; Jurison, 1996; Klecun and Cornford, 2005). As multiple stakeholders are involved in the planning, design, implementation, management, and use of IT, the different perspectives of individual stakeholders, e.g. users, IS personnel, management, and external stakeholders, can be considered. Hamilton and Chervany (1981a, b) rationalize how stakeholder differences can influence IS evaluation objectives and measures. As different stakeholders would have different views about IS effectiveness, it is important to first define the perspective explicitly; Grover et al. (1996) raise a question about the perspective from which IS effectiveness is judged. Bajwa et al. (1998) describe that measuring executive IS success is a difficult task due to its multidimensionality and the different viewpoints of the evaluators. Bernroider (2008) adds that enterprise resource planning (ERP) success is multi-dimensional, and frequently, there are multiple stakeholders in evaluating ERP success. Hakkinen and Hilmola (2008) observe that different perspectives and different dimensions have added complexity to IS evaluation – a reason why a one-for-all evaluation framework is not available. Skok et al. (2001) share the same opinion that post-implementation evaluation in general faces disparity in the areas of stakeholder
perspectives and evaluation dimensions. As identified by Gable et al. (2008), several weaknesses of past IS success studies are related to stakeholder perspectives, evaluation dimensions, and evaluation measures. First, past studies used outdated evaluation instruments and measures that did not reflect contemporary technology development. Second, many studies did not explain their choice of constructs and measures, and the validity of the constructs seemed questionable. Third, with just a few selected constructs, the completeness of the evaluation model also seemed questionable. Fourth, although IS involve multiple stakeholders, past studies focused on only one perspective.

The scope of evaluation brings another major issue. The scope can cover a specific system or a total organizational systems portfolio. Evaluation objectives also have implications for the evaluation scope. By defining the scope, relevant measures can then be identified (Grover et al., 1996). However, inconsistent definition of IT and disparate types of IT make scope definition a challenge (Palvia et al., 2001; Weill, 1992). Given the different types of IS and the difficulty in separating IS from the work system, where there are similarities between the two, evaluation scope can be difficult to define, e.g. when IT is part of an organizational change (Klecun and Cornford, 2005). Skok et al. (2001) add that during the systems development cycle, the external environment or organizational learning processes can cause evaluation objectives and measures to change. Unclear evaluation objectives and measures contribute to the difficulty in evaluating IS effectiveness.

Relating IS effectiveness to organizational effectiveness would also be difficult, as IS effectiveness involves several aspects. Devaraj and Kohli (2003) suggest that one of the difficulties in establishing the link between IT and organizational performance can be attributed to the summative nature of IT impact. This makes it very difficult to pinpoint exactly the impact of an individual technology. Love et al. (2001) also highlight the problems of attributing causality and isolating the effects derived from an IS implementation.

In addition, Gunasekaran et al. (2006) highlight several issues related to measures, i.e. no common definition of organizational performance measures and metrics; no clear distinction among strategic, tactical, and operational performance measures and metrics; and the difficulty in identifying and measuring intangible and non-financial measures. Mirani and Lederer (1998) recognize that there is no single best approach to measure the organizational benefits of IS projects. The use of quantitative and qualitative measures has also been a concern (Hamilton and Chervany, 1981a). Many past studies relied heavily on financial measures and failed to consider intangible benefits; qualitative measures have not received enough attention in IT evaluation, although evaluation should cover intangible benefits, not only financial ones (Irani et al., 2005). When qualitative measures are used, there is the issue of subjectivity due to the different opinions of the evaluators (Anandarajan and Wen, 1999). There is also the question of scale validity and reliability (Devaraj and Kohli, 2003). As explained by Hamilton and Chervany (1981a), in measuring IS success, having too many measures tends to obscure meaningful comparison across studies and creates confusion about the meaning of each measure.

It is not that straightforward to define IS success (Agourram, 2009), and there are always multiple evaluation dimensions to consider. Having too many dimensions in turn introduces a large number of diverse measures, which complicates IT evaluation further (Devaraj and Kohli, 2000).
Past literature has also provided other reasons why previous studies have produced conflicting findings about IT benefits or have not been able to illustrate affirmatively the relationship between IT investment and firm performance. Evaluation timing is among those highlighted (Bernroider, 2008; Hakkinen and Hilmola, 2008; Mohd. Yusof et al., 2008). Time lag (Jurison, 1996) and cross-sectional data (Weill, 1992) could have accounted for the differences in past studies. Differences in research design, e.g. snapshot vs longitudinal data, can also make comparison across studies difficult (Devaraj and Kohli, 2003). There is also a need to specify the unit of analysis that the evaluation measures target (Devaraj and Kohli, 2000; Mirani and Lederer, 1998), as different units of analysis have implications for evaluation measures. Besides unit of analysis, level of analysis needs particular attention as well (Devaraj and Kohli, 2000; Mukhopadhyay et al., 1995; Weill, 1992). It is essential to identify the level of analysis, e.g. individual, business unit, organization, and nation (Jurison, 1996). For example, management is interested in understanding how IS contributes to organizational performance, whereas users are more interested in IS use and user satisfaction (Grover et al., 1996). When aggregated, the benefits gained from a well-designed system may be offset by a poorly-designed system (Mukhopadhyay et al., 1995).

Last but not least, there is the issue of the availability of underpinning theoretical frameworks and models (Agourram, 2009; Hakkinen and Hilmola, 2008). Gunasekaran et al. (2006) specifically point out that there are no theoretical frameworks available to measure the impact of IT on organizational performance or to help guide the selection of evaluation tools and techniques. There is also a lack of validated IS evaluation models. Stockdale et al. (2006) and Stockdale and Standing (2006) propose that an IS evaluation framework should consider three factors, i.e. content, context, and process. Content refers to the evaluation subject and the relevant evaluation criteria. Context refers to the evaluation purposes, the stakeholders, and the micro and macro environments of the organization. Process refers to the execution of the evaluation, e.g. evaluation methods, tools, and techniques. Such a framework allows a comprehensive evaluation to answer the questions of why – evaluation objectives; what – evaluation dimensions; who – stakeholder perspectives; when – a point in time when the evaluation is conducted; and how – evaluation methods. Mohd. Yusof et al. (2008) share a similar point of view, highlighting that an evaluation framework should involve the aspects of technology, human, and organization. In general, an IS evaluation should not just cover the financial and technical aspects, but also the social aspect.

To conclude, past literature has provided clues to the IT evaluation issues and challenges faced by IS researchers, IS practitioners, and business managers. It is evident that evaluation perspectives, dimensions, and measures still remain a central issue in IT evaluation; in order of frequency, different perspectives leads the list. Table I provides a summary of the IT evaluation issues and challenges. Each of the IT evaluation issues and challenges is discussed next.
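The content-context-process factors and the why/what/who/when/how questions discussed above can be read as a checklist for planning an evaluation. The sketch below is purely illustrative and does not appear in the cited studies; every field name and sample value is an invented assumption.

```python
# Illustrative checklist for an evaluation plan, loosely following the
# content-context-process questions discussed above. All names and sample
# values are invented for this example.
from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    why: str          # evaluation objectives (context)
    what: list        # evaluation dimensions and criteria (content)
    who: list         # stakeholder perspectives to consider (context)
    when: str         # point in the life cycle when evaluation is conducted
    how: list         # evaluation methods, tools, and techniques (process)

plan = EvaluationPlan(
    why="Assess the contribution of a hypothetical ERP system to order fulfilment",
    what=["system quality", "information quality", "business efficiency"],
    who=["users", "IS personnel", "management"],
    when="post-implementation",
    how=["user survey", "process cycle-time analysis"],
)
print(plan.when, "-", ", ".join(plan.who))
```

Writing the plan down in this form forces each of the five questions to be answered explicitly before any measure is chosen.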
Table I. Summary of IT evaluation issues and challenges
Note: 1 – evaluation scope; 2 – evaluation timing; 3 – unit of analysis; 4 – level of analysis; 5 – different perspectives; 6 – different dimensions; 7 – different measures; 8 – underpinning theoretical frameworks
Studies summarized: Agourram (2009); Anandarajan and Wen (1999); Bajwa et al. (1998); Bernroider (2008); Chou et al. (2006); Devaraj and Kohli (2000); Devaraj and Kohli (2003); Gable et al. (2008); Grover et al. (1996); Gunasekaran et al. (2006); Hakkinen and Hilmola (2008); Hamilton and Chervany (1981a); Hamilton and Chervany (1981b); Irani et al. (2006); Jurison (1996); Klecun and Cornford (2005); Mirani and Lederer (1998); Mohd. Yusof et al. (2008); Mukhopadhyay et al. (1995); Palvia et al. (2001); Skok et al. (2001); Stockdale et al. (2006); Weill (1992)
[Matrix mapping each study to issues 1-8 omitted]

2.1 Evaluation scope
One limitation of previous IT business value studies is the treatment of IT as an aggregate system where different types of systems are grouped together. Organizations have different definitions of IT, but the definitions tend to cover everything about IT, e.g. hardware, software, telecommunications, and people. A broad definition causes measurement problems (Weill and Olson, 1989). As IT covers a broad range of elements, the scope of IT should be defined specifically. The type of systems under investigation must be specified in order to make a meaningful assessment (Seddon et al., 2002), as involving disparate systems in an IS performance study makes comparison across studies difficult (Cronholm and Goldkuhl, 2003). Oz (2005) agrees that in studying the value of IT, it is necessary to first define what to evaluate before deciding how to evaluate. Hamilton and Chervany (1981a) and Ammenwerth et al. (2003) suggest having clear evaluation objectives to help define the evaluation scope. By defining the scope, only relevant measures are identified, ensuring that the right thing is measured.

Some researchers did that. For example, in a survey about IT evaluation approaches among 80 senior IT managers in medium to large European and US organizations, Seddon et al. (2002) segregated three types of IT: the overall IT portfolio, an individual application project, and the IT function. Pitt et al. (1995) suggest that to measure user perception of the service quality of the IS function, the focus can be on the IS department and a specific IS. Ragowsky et al. (2000) argue that evaluation should not be for the entire IS applications portfolio. Instead, it should be for a specific IS application, for one simple reason – benefits gained from a specific IS application are more precise and detailed than a generic set of benefits. To support their argument, they conducted an empirical study.
2.4 Level of analysis
Potential value of IT can happen at different levels of analysis. It has been noted that national aggregate data does not help reveal details about IT value; instead, firm and industry studies help provide more details. Weill and Olson (1989) advise that IT investment might not be ideally assessed at the organizational level; the linkage between IT investment and organizational performance could be best demonstrated at the strategic business unit level. Shayo et al. (1999) mention that system value can be examined at four different levels: system, user, organization, and economy. Grover and Segars (1996) operationalize IS success with firm- and market-level measures. Tangpong (2008) reckons that IT impact should be examined at the organizational level because organizations are expecting to see a return on their IT investment, and Kohli and Grover (2008) agree that IT value should be examined at the organizational or inter-organizational level. In a study to examine how IT infrastructure and organizational contextual factors affect the success of IS implementation, Kim et al. (1999) suggest that, apart from the end user, we should also look at different levels of analysis. As there are multiple levels of analysis, and each level requires different evaluation measures, conflicts among stakeholders could happen. Considering this, a comprehensive, integrated IT evaluation model should cover different perspectives and stakeholders, instead of individual stakeholders. Performing an analysis across different levels of analysis would provide a comprehensive picture and thus minimize conflicts.

Cronk and Fitzgerald (1999) stress that to explain a complex construct like IS business value, three levels of IS business value can be considered: system-dependent, user-dependent, and business-dependent. The system-dependent level refers to the value added by system characteristics, e.g. response time, downtime, accuracy, semantic quality, and timeliness. The user-dependent level refers to the value added by user characteristics, e.g. user skills and attitudes. The business-dependent level refers to the value added by IS-business alignment, e.g. business goals realization. Davern and Kauffman (2000) support the notion that potential value exists at different levels of analysis, e.g. business process and individual. By identifying first the potential value of an IT investment, recommendations can then be made on how to realize it, and the extent of value realization can then be measured. With this approach, the potential value would be in the best interest of the organization.

Some researchers propose that IT benefits are realized in stages at different organizational levels. Melville et al. (2004) report that IT contributes to both business process performance and organizational performance. Business process performance refers to the operational efficiency enhancement associated with specific business processes, whereas organizational performance refers to the total impact of IT on organizations. They recommend that future research should not just study the average effect of IT, but also firm strategy, best practices, and characteristics, to understand how IT can be exploited for greater IT value. Searching for an answer, Barua et al. (1995) suggest that the key to answering the IT value question lies with the measurement of value at both intermediate and higher levels. They adopted a process-oriented methodology to examine the impact of IT on small business units (SBUs). A survey instrument was designed to include 130 variables and collect data from 60 strategic business units of 20 large US and Western European corporations, and analysis was performed for three specific functions: production, marketing, and sales and distribution. IT had effects on operational-level variables, e.g. inventory turnover, which in turn affected higher-level variables, e.g. market share. They concluded that IT impacts seen at the lower levels of an organization would escalate into the higher levels.

2.5 Different perspectives
The attempt to measure IS business value is a complicated issue, given its multi-dimensional and multi-stakeholder nature. One of the problems points to the definition of value, which differs from one person to another, as there are different individual opinions, views, and backgrounds. Measurement of value very often involves different organizational groups, and each group has its own perception of value (Davern and Kauffman, 2000). A common definition of IT value, although sometimes difficult to reach, and of how it can be measured must be agreed among the stakeholders before an evaluation can be successful (Bannister and Remenyi, 2000). Subjective perception and different group experiences can influence the results of IS evaluation.

Different stakeholders have different perspectives about benefits and risks. Lyytinen and Hirschheim (1987) reckon that different stakeholders have different value sets, and failing to meet the expectations of a particular stakeholder group is a common cause of IS failure. They identify four types of IS failure: correspondence, process, interaction, and expectation failures. Among the four, expectation failure is believed to have contributed the most to IS failure. Hamilton and Chervany (1981b) highlight that the differences in perspectives among different functional groups can lead to conflicts, e.g. evaluation outcomes can be totally different from systems developers to systems users. McAulay et al. (2002) note that different stakeholder groups, using dissimilar sets of criteria in the evaluation and decision-making process, share differences and similarities. Individual perspectives would subsequently affect the choice of relevant measures.

Thus, in IS effectiveness evaluation, it is necessary to first define the perspective, and there is a need to consider multiple viewpoints of evaluation objectives and measures. An evaluation could involve such stakeholders as initiators, evaluators, users, and interest parties (Serafeimidis and Smithson, 2000), and it has been suggested that evaluation of IS can be performed from three different perspectives, i.e. the CEO, the IS manager, and the user. It is therefore common to find research studies involving multiple stakeholders. For example, Kanungo et al. (1999), in a study to examine factors leading to IS effectiveness, interviewed over 120 respondents of different organizational levels. Fearon and Philip (1999), in a study of the Irish supermarket retail sector, examined the performance of EDI from the perspectives of retailers, distributors, and suppliers. Remenyi and Sherwood-Smith (1999) describe an evaluation approach which involves the participation of key stakeholders; the stakeholders are not only involved in the initial decision-making stage but also in the design stage, to discuss how the system is to be designed. Recognizing that different stakeholders and systems settings will call for different measures in assessing IS success, Seddon et al. (1999) propose a two-dimensional, 30-category IS effectiveness matrix which includes different combinations of stakeholders and systems.
in a study to compare performance measurement practices between Australian manufacturing and service industries. Palvia et al.. Other dimensions have been identified in past literature. and productivity in terms of systems efficiency. management. Sohal et al. (2002) propose an approach to evaluate new IT projects along four dimensions: strategic. He suggests using operational performance measures rather than financial measures.6 Different dimensions Traditional evaluation approaches do not take into account the multi-dimensional aspect of performance and focus mainly on financial measures. 1996). Although financial measures address the concerns of shareholders. informational. quality in terms of customer satisfaction. Treating system use as a multidimensional construct. and quality principles. represented by measures from the usage. and customer relations. and information flexibility.e. and transactional. productivity. Informational refers to business effectiveness and can be quantified in financial terms. (2001). A multi-dimensional approach to measuring IS success. i. 2. problem solving. Eskow (1990) proposes that measures of IT value can be considered from three different aspects: performance in terms of project goal fulfillment. value of products or services. operational. alignment. vertical integration. is better than a unidimensional approach (Rai et al. In assessing organizational benefits of IS.e. users. and financial dimensions. they fail to consider the internal and external stakeholders (Brignall and Ballantine. In a study of IS effectiveness in an extended supply 46 . employees. managers. (1992) observed two types of system impacts. Informational subdimensions were information access. Transformational refers to the effects on business performance. horizontal integration. pointing out that a comprehensive evaluation of IS success should include measures from different dimensions. 
Automational refers to efficiency benefits that can be quantified easily.JSIT 12. Strategic subdimensions identified were competitive advantage. In a study of a system in Federal Express Corporation. IS success should be evaluated as a multidimensional construct. Merely one or two variables are not good enough. With the 33 benefits identified from past studies. satisfaction and decision performance dimensions (Arnold. and systems developers. tactical. automational. Marsh and Flanagan (2000) propose a similar framework to consider IT benefits of three dimensions. informational. Cronk and Fitzgerald (1999) suggest starting with broad generic business value measures. i. 1995). the system also delivered specific organizational benefits in four areas: personnel division. they conducted an empirical study involving 178 projects to further classify the benefits into three subdimensions within each dimension. Doll and Torkzadeh (1998) propose five dimensions of system use. Transactional subdimensions were communications efficiency. allowing the IS context in question to decide on the final measures later. 2002). Mirani and Lederer (1998) classify benefits into three categories in terms of the organizational objectives each IS serves: strategic. Irani et al. and business efficiency. grouped 16 measures into five categories: scarcity of stock.1 i. costs. systems development efficiency. In addition to long-term strategic benefits. Knowing that it is difficult to have all perspectives considered. information quality. This notion of multi-dimension is also supported by DeLone and McLean (1992) in their classic article on IS success. Scott (1995) argues that a causal model would better reflect the multi-dimensional nature of IS effectiveness. and extra-organizational relationships. and transformational. and customer service. which takes into account the interdependencies among measures.e. decision rationalization.
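Mirani and Lederer's three-by-three classification described above amounts to a small lookup table. The sketch below is illustrative only; the `dimension_of` helper is our own construction for organizing benefit statements during an evaluation, not part of their instrument:

```python
# Mirani and Lederer's (1998) benefit dimensions and the subdimensions the
# text lists under each. The dimension_of() helper is a hypothetical
# illustration, not part of their survey instrument.
BENEFIT_DIMENSIONS = {
    "strategic": ["competitive advantage", "alignment", "customer relations"],
    "informational": ["information access", "information quality",
                      "information flexibility"],
    "transactional": ["communications efficiency",
                      "systems development efficiency",
                      "business efficiency"],
}

def dimension_of(subdimension):
    """Return the benefit dimension a given subdimension belongs to."""
    for dimension, subdimensions in BENEFIT_DIMENSIONS.items():
        if subdimension in subdimensions:
            return dimension
    raise KeyError(f"unknown subdimension: {subdimension}")

print(dimension_of("information quality"))  # prints "informational"
```

Grouping candidate benefit measures this way makes explicit which organizational objective each measure speaks to before any scoring is attempted.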
2.7 Different measures
Researchers have tried different evaluation measures, covering multiple aspects, e.g. financial, human, and organizational aspects, and ranging from the strategic to operational levels, e.g. market share, sales revenue, financial assets, stock price, operating efficiency, costs, user satisfaction, and decision quality. Weill (1992) emphasizes that a single measure of IS success has limited meaning; rather, measures of multiple dimensions are needed. A simple productivity measure is not adequate to wholly reflect the business contribution of IT. However, the involvement of a wide variety of measures in IT benefits studies, although it offers breadth and depth, makes comparison across studies difficult, as highlighted by Devaraj and Kohli (2000).

Kraemer et al. (1993), in a study on the perceived usefulness of computer-based information involving 260 public managers of 46 US city governments, concluded that perceived usefulness continued to serve as a good surrogate measure of IS success. When users find the system satisfactory, they would tend to perceive the system successful. Palvia et al. (2001) support the notion that IS quality would be a good surrogate measure for IS success. Adopting a socio-technical approach and drawing evaluation measures from the technological, human, and organizational aspects, they introduced a quality assessment instrument which consists of 39 variables to measure IS quality of expert systems in 22 US insurance companies. Pitt et al. (1995) comment that the product aspect of the IS function has been receiving great attention from IS researchers, but the service aspect is largely ignored. The IS function does not just deliver products but also provides services, and in measuring IS effectiveness, the IS service quality dimension is equally important. They propose that DeLone and McLean's IS success model be expanded to include the IS service quality dimension.

Commenting that past literature has not satisfactorily linked IT investment to organizational strategic and economic performance, Mahmood and Mann (1993) attempted to identify such a correlation. They claimed that their study, which included the one hundred firms in the 1989 Computerworld Premier 100 list, was the first to provide such evidence. They reported that a single measure was not good enough to represent the strategic and economic performance of an organization; instead, multiple measures were needed. Recognizing that different companies may have preferences over performance measures, Premkumar and King (1992) asked respondents to first choose five performance measures they considered important to the organization and then indicate the extent of contribution of IS to each of the five performance measures. They then calculated the weighted average of each performance measure to derive a single figure denoting the performance of IS. The top five performance measures reported were ROI, market share, operating efficiency, sales revenue, and profit performance.

In a study of IS effectiveness in an extended supply chain, Edwards et al. (2001) used three groups of measures, i.e. operational support, infrastructure support, and decision-making support, and grouped 16 measures into five categories, e.g. scarcity of stock, value of products or services, and extra-organizational relationships. To evaluate IT infrastructure investment, consideration can be given to dimensions such as usage, e.g. connectivity and data storage, and IT infrastructure flexibility (Kumar, 2004). In view of the issues encountered in post-implementation IS evaluation, Willcocks and Lester (1996) propose that evaluation measures be linked to business benefits. These measures can then be used to evaluate performance of IT in different phases of the systems development life cycle. Stivers and Joyce (2000) suggest that evaluation measures should reflect the organizational strategy and quality principles.

Financial techniques might not give the full picture as intangible benefits such as flexibility could be ignored. As cost-benefit analysis does not consider intangible dimensions, it is inappropriate for assessing strategic IT investments. If we are able to quantify intangible value, then we would be able to justify a strategic IT investment rather easily (Beamon, 1999). This has prompted business managers to search for better evaluation techniques to justify why organizations should make investment in IT (Farbey et al., 1992). Fitzgerald (1998) claims that it is inappropriate to evaluate an IT project which aims to improve organizational effectiveness with financial measures. To evaluate the effectiveness of such a project, he proposes a two-stage benefit realization framework: a benefit eventually leads to a positive behavior at the next stage. The challenge faced by IS researchers and business managers is to fully capture the benefits in both stages, as some benefits might see 100 percent realization.

External and internal organizational factors play another key role in influencing the type of IT investment. A comprehensive evaluation should examine the purpose and strategy of IT management as well as take into consideration such factors as external environment, structure, goals, strategy, and culture. A review of IT value articles published in four leading management information systems journals from 1993 to 1998 revealed that organizational factors had influences on the selection of either qualitative or quantitative-based measures (Chan, 2000). As a result of different types of IT investment, the nature of measures has also shifted from quantitative to qualitative aspects (Sugumaran and Arogyaswamy, 2003/2004). It has also been suggested that performance measures be grouped in terms of management objectives, e.g. cost center, service center, and investment center; as we move from cost center to investment center, each type of IT investment calls for a different set of performance measures.

To better understand the construct of IS effectiveness and its measures, Grover et al. (1996), after conducting a review of past literature and taking into consideration three evaluation approaches, i.e. evaluation referent, unit of analysis, and evaluation types, propose a framework to classify IS effectiveness measures into six different types: classes I, II, III, IV, V, and VI. Class I refers to infusion measures that depict information quality in terms of completeness and accuracy. Class II refers to market measures that depict market reaction to the introduction of IS. Class III refers to economic, quantitative measures that depict financial and productive effects of IS use. Class IV refers to usage measures that depict the use of IS in terms of ease-of-use and motivational aspects of use. Class V refers to perceptual measures that depict user attitudes, beliefs, and perceptions about IS. Class VI refers to productivity measures that depict the impact of IS on organizational performance in terms of managerial performance and productivity.

A post-implementation review is often performed with a large number of evaluation measures that are not specific to the type of IS under evaluation. Although past studies have tried different types of measures, the suitability and appropriateness of the measures for different types of IS have not been looked into. Klein et al. (1997) conducted a study to investigate if measures and their importance varied according to the types of IS. They found out that there were differences among IS in terms of measures; the importance of each measure varied according to the types and objectives of IS. For example, decision and organizational impact measures were of high importance for higher-level support functions of DSS and information reporting systems. However, user impact measures were found to be of about equal importance for all types of IS examined. Universal measures did exist, and they suggested that it was possible to develop a standard scale to represent each category of the universal measures. A study into the underlying structure of the measures helps isolate common measures that are appropriate for each type. This understanding can then be helpful in developing a standard IS evaluation instrument.
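The weighted-average scoring that Premkumar and King (1992) describe can be sketched in a few lines. The measure names, weights, and contribution ratings below are illustrative assumptions, not data from their study:

```python
# A minimal sketch of the weighted-average approach described above:
# respondents choose the performance measures they consider important,
# weight them by importance, and rate the contribution of IS to each;
# the weighted average yields a single figure denoting IS performance.
# All names and numbers here are illustrative, not from the 1992 study.

def composite_is_score(ratings):
    """ratings: iterable of (measure, weight, contribution) tuples."""
    total_weight = sum(weight for _, weight, _ in ratings)
    if total_weight == 0:
        raise ValueError("at least one measure must carry a non-zero weight")
    weighted = sum(weight * contribution for _, weight, contribution in ratings)
    return weighted / total_weight

ratings = [
    ("return on investment", 0.30, 4.0),   # contribution rated on a 1-5 scale
    ("market share",         0.25, 3.0),
    ("operating efficiency", 0.20, 4.5),
    ("sales revenue",        0.15, 3.5),
    ("profit performance",   0.10, 3.0),
]

print(composite_is_score(ratings))  # a single figure denoting IS performance
```

Dividing by the total weight keeps the composite on the same scale as the individual contribution ratings, so the single figure remains interpretable even when respondents assign weights that do not sum to one.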
2.8 Underpinning theoretical frameworks
Bannister and Remenyi (2000) claim that IT evaluation practices have not advanced much. In making IT investment decisions, business managers still follow their instincts or conduct some simple analysis. They give two reasons. First, when non-financial measures are used, there is a heavy focus on the measures but not on the evaluation process. Second, non-financial evaluation techniques are not supported by theory. Other researchers have pointed out the same. Renkema and Berghout (1997) observe that evaluation techniques need further validation; even if there is an attempt to do so, they are not easy to be applied, and there is a gap between what is being theorized and practiced. Hallikainen et al. (2002) share several limitations of current evaluation approaches for effective IS evaluation: limited use or absence of methods, unrealistic assumptions, and lack of underlying theoretical frameworks.

Mahmood and Mann (1993) agree that past studies largely ignore the use of a conceptual framework in evaluating IT investment and organizational performance. They are of the opinion that a good performance measurement approach should be supported by an underlying theoretical framework. Mukhopadhyay and Cooper (1993) stress the need for a theoretical underpinning to help assess business value of IS. Cooper and Quinn (1993) also suggest that a theory is needed in evaluating IS effectiveness. To help develop such a theory, they propose the use of the Competing Values Framework, which describes the information processing capability of IS. They reckon that effective IS can help organizations achieve organizational effectiveness. The real value of IT becomes apparent when IT helps organizations to be more effective and efficient.

Larsen (2003) proposes a taxonomy which includes twelve categories of organizational IS success factors. After reviewing 212 articles published in eleven IS, management, technical, and organizational journals, in which IS success was the dependent variable and success factors were the independent variables, he reports several observations. First, it was concluded that not much progress had been made in the understanding of the dependent and independent variables. Second, evaluation measures involved in these studies are rather superficial; particularly, there were no measures of IS success at the organizational level. Third, very frequently, only case studies are used, and the proposed theories are not complete, unpractical, and hence not useful to business managers.

Arnold (1995) highlights that the use of surrogate measures in evaluating IS success might not reflect the real situation. He advises that in choosing measures of IS success, researchers should use system- and organization-specific measures. There is also the question of construct validity: Cronk and Fitzgerald (1999) mention that a major weakness of measurement instruments is the lack of construct validity, and Stockdale and Standing (2006) suggest that the use of validated measures helps add to the body of knowledge. New measures are also necessary to reflect the increasingly intangible nature of strategic systems.

3. Conclusion
Having identified the eight issues and challenges, this paper concludes with several suggestions for further improvement in IT evaluation practices. It is clear from the review that stakeholder perspectives, evaluation dimensions, and evaluation measures occupy the top three positions in the issue and challenge list.
IS researchers, IS specialists, and business managers should recognize that, as evaluation scope has overall implications for the other seven issues, it is essential to first define the evaluation scope. A well-defined evaluation scope can be useful as it presents a context to ease decision-making. To define the scope, several questions need to be answered: What are the evaluation objectives? What type of IT, e.g. a single IT application, a type of IT, or all IT applications (Seddon et al., 1999)? What are the intents and uses of IT, e.g. operational, tactical, or strategic? What is the depth and breadth of evaluation? What are the evaluation constraints, e.g. cost, time, and experience?

A clearly-defined evaluation scope helps a great deal in providing pointers for the issues of stakeholder perspectives, evaluation dimensions, and evaluation measures. Start first with the stakeholder perspectives; several questions need to be answered: How many perspectives and why, single or multiple? Who are the stakeholders? Is there an order of priority? Next, questions about the evaluation dimensions: How many dimensions? What are the dimensions? Do these dimensions belong to similar or diverse groups? Are these dimensions associated with the stakeholders identified earlier? Then, questions about the evaluation measures: How many measures within each of the individual dimensions identified earlier? What are the measures? What type of measures, e.g. quantitative or qualitative?

Equally important, a correct decision has to be made on the level of analysis and unit of analysis for an effective IT evaluation. With the top four issues considered, IS researchers, IS specialists, and business managers can then move on to deliberate evaluation timing. Again, an IT evaluation can be ex-ante, ex-post, or at any stage of the system development life cycle so long as the evaluation fulfils the agreed evaluation objectives. Lastly, an underpinning theoretical framework or model should be adopted whenever possible.

References
Agourram, H. (2009), "Defining information system success in Germany", International Journal of Information Management, Vol. 29 No. 2, pp. 129-37.
Ammenwerth, E., Graber, S., Herrmann, G., Burkle, T. and Konig, J. (2003), "Evaluation of health information systems – problems and challenges", International Journal of Medical Informatics, Vol. 71 Nos 2/3, pp. 125-35.
Anandarajan, A. and Wen, H.J. (1999), "Evaluation of information technology investments", Management Decision, Vol. 37 No. 4, pp. 329-37.
Arnold, V. (1995), "Discussion of an experimental evaluation of measurements of information system effectiveness", Journal of Information Systems, Vol. 9 No. 2, pp. 85-91.
Bajwa, D.S., Rai, A. and Brennan, I. (1998), "Key antecedents of executive information system success: a path analytic approach", Decision Support Systems, Vol. 22 No. 1, pp. 31-43.
Ballantine, J. and Stray, S. (1998), "Financial appraisal and the IS/IT investment decision making process", Journal of Information Technology, Vol. 13 No. 1, pp. 3-14.
Bannister, F. and Remenyi, D. (2000), "Acts of faith: instinct, value and IT investment decisions", Journal of Information Technology, Vol. 15 No. 3, pp. 231-41.
Bannister, F. and Remenyi, D. (2003), "The societal value of ICT: first steps towards an evaluation framework", Electronic Journal of Information Systems Evaluation, Vol. 6 No. 2, pp. 197-206.
Barua, A., Kriebel, C.H. and Mukhopadhyay, T. (1995), "Information technologies and business value: an analytic and empirical investigation", Information Systems Research, Vol. 6 No. 1, pp. 3-23.
R. R. 4. 275-92. 45 No.J. Journal of Information Technology. Y. 33 No. 1. Cronholm. 1.Y. Vol. and Targett. G. European Journal of Operational Research. and Remenyi. Doll. Information & Management. Electronic Journal of Information Systems Evaluation. ‘‘Developing a multidimensional measure of system-use in an organizational context’’. pp. 1026-46. and Fitzgerald. (2006). ‘‘Measuring supply chain performance’’. G. 65-74. M. and Kohli. Vol. Vol. and Ballantine. 4. (1999). 3-21.T. Vol. M. Cronk. and Tzeng. Journal of Information Technology. pp. ‘‘The eleven years of the European conference on IT evaluation: retrospectives and perspectives for possible future research’’.W.M. ‘‘IT value: the great divide between qualitative and quantitative and individual and organizational measures’’. M. 7 No. 1. 257-69. W. The Journal of Economic Perspectives. ‘‘Evaluating investments in IT’’. and Philip. Logistics Information Management. (1993). 3.H. pp. B. 1-28. 175-201. PC Week. G. ‘‘Evaluating IT/IS investments: A fuzzy multicriteria decision model approach’’. 4. pp. D. E. pp. 8 No. (1990). and Kohli.N. N. ‘‘IT governance for enterprise resource planning supported by the DeLone-McLean model of information systems success’’. pp. Chou. Brynjolfsson. (2003). Vol. Advances in Computers. Vol. ‘‘Discovering potential and realizing value from information technology investments’’. L. and Quinn. Brady. Peters. (1996). D. E. 16 No. International Journal of Service Industry Management. (1999). 171-85. Information Systems Research. 173 No. Vol. ‘‘Information technology payoff in the health-care industry: a longitudinal study’’. 225-61. pp. 6 No. 12 Nos 1/2. Vol. ‘‘An empirical study of the use of EDI in supermarket chains using a new conceptual framework’’. E. M. 60-95.C. Devaraj. Eskow. (2000). S. W. Vol. 3. 2. (1998). 2. pp. pp.M. Vol. Farbey. (2001). 3.Beamon. and Yang. pp. Chan. 40-9. pp. ‘‘Measuring performance is key to finding value of IS’’. Davern. pp. 
‘‘Performance impacts of information technology: is actual usage the missing link?’’. 81-98. pp. Management Science. ‘‘The effectiveness of information systems in supporting the extended supply chain’’. Vol. S. Saren. ‘‘Implications of the competing values framework for management information systems’’. (2008). Chou. E. (1999). Devaraj. S. 3 No. pp. 758-67.. 2. ‘‘The impact of IT on marketing: an evaluation’’. J. and Kauffman. 14 No. ‘‘Performance measurement in service businesses revisited’’. and Goldkuhl.. 1. (2000).J. Vol. 7 No. (1996). D. organizational transformation and business performance’’. Management Decision. 1. Journal of Management Information Systems. Vol. and Sharman. Vol. 41-67. 5.B. ‘‘Information technology and productivity: a review of the literature’’. 16 No. (2000). B. G.P. R. pp. and Tzokas. and McLean. 121-43. 49 No. 14 No. Brignall. p. P. Berghout. ‘‘Beyond computation: information technology. 19 No. International Journal of Operations & Production Management..E. 16 No. 4. M. C. 43 No. DeLone. R. 10. (2000). Vol. 179-214. Vol. pp. (1999). Vol. Journal of Management Information Systems. Journal of Management Information Systems.. pp. 32 No. R. G. Vol. Electronic Journal of Information Systems Evaluation. pp. Edwards. 6-31. 273-89. pp. 7 No. and Hitt. S. Vol. 4. S. (1992). Vol. and Torkzadeh. 1. S. E. 160.J. Brynjolfsson. Land. 37 No. Information & Management. 23-48. Vol. 22. 109-22. 22 No. ‘‘Strategies for information systems evaluation: six generic types’’. Information technology evaluation 51 . (1992). ‘‘Understanding IS business value: derivation of dimensions’’. pp. (2005). T. ‘‘Information systems success: the quest for the dependent variable’’. F. Journal of Business Logistics. Bernroider. E. Fearon. Cooper. (2003). Human Resource Management.
R.D. Systems Research and Behavioral Science. 16 No. 1103-22. G. and Segars. pp. A. 161-77. 199-211. P. J. 3. 39 No. Vol. Jurison. Vol. Vol. 1. Duda. 177-91.. 40 No. A. pp. ‘‘Information technology and systems justification: a review for research and applications’’. 6.1 52 Fitzgerald. 18 No. Vol. D. Vol. Jeong. pp. pp. (1998). 3. Vol. HI. 5 No. ‘‘The temporal nature of is benefits: a longitudinal study’’. and Grieve. MIS Quarterly. Ngai. Journal of Management Information Systems. 79-86. pp. C. Hamilton. 173 No. Journal of Information Technology. (2000/2001). 2977-86. ‘‘The propagation of technology management taxonomies for evaluating investments in information systems’’. and consumer surplus: three different measures of information technology value’’. Gable. and Meinert. D. ‘‘Re-conceptualizing information system success: the IS-impact measurement model’’. 17 Nos 11/12.. 73-100. (1999). 695-706. ‘‘Information systems evaluation: navigating through the problem domain’’. H. (2002).J.. ‘‘Productivity.E. Kim. 68-72. pp.E. ‘‘Information systems effectiveness: the construct space and patterns of application’’. European Journal of Operational Research. Ezingeard.JSIT 12. Vol. E. G. Peterson. Vol. K. 2.S.. ¨ ¨ Hallikainen. 9-25. Information & Management. European Journal of Operational Research. S. 5 No. T. Gunasekaran.D. pp. Y. International Journal of Information Management. 20 No. Ghoneim.E. Vol. and Srinivas. 121-42. A. 229-43. MIS Quarterly. and Segars. (2008). 1. ‘‘ERP evaluation during the shakedown phase: lessons from an after-sales division’’. pp. 3. 495-518. (2008). ‘‘A critical approach to evaluation’’. J. ‘‘Evaluating cost taxonomies for information systems management’’. Technovation.T. .. V. pp. and Kahraman.H. P. P. 4. 7. Vol. Sharif. (2002).. Vol. Z. European Journal of Information Systems. 15-27. Information & Management. Information & Management.W. ‘‘Students’ perceptions on information systems success’’. 17 No. N. 13 No. pp. 1. 
‘‘Evaluating information system effectiveness – part I: comparing evaluation approaches’’. Irani. The Journal of Computer Information Systems. Vol. ‘‘Integrating the costs of a manufacturing IT/IS infrastructure into the investment decision-making process’’.. and McGaughey. Z. and Chervany. Kivijarvi. P. 377-408. and Love. and Cornford. O. business profitability. 75-9. Information Systems Journal. International Journal of Production Economics. ‘‘Evaluating strategic IT investments: an assessment of investment alternatives for a web content management system’’.. Grover. T. Vol. Z. Irani. ¨ Hakkinen. Vol. E. E. Journal of the Association for Information Systems. Z. ‘‘A structured model for evaluating information systems effectiveness’’. (2002). and Chervany.. 3. (1981a). 2. ‘‘Evaluating information systems projects: a multidimensional approach’’. and Love. Grover. D.M. Irani. Kanungo. S. Hamilton. Vol. V. MIS Quarterly. Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS’02) in Big Island. pp. ‘‘The relationship between organizational characteristics and information system structure: an international survey’’.. Irani. and Nurmimaki. 30 No. R. 11-24. (1997). Klecun. S. S. pp. A. Vol.D. S. pp. 9 No. (1996). pp. ‘‘Evaluating information system effectiveness – part II: comparing evaluator viewpoints’’.N. pp. 3. (1996). (1999). ‘‘Applying concepts of fuzzy cognitive mapping to model: the IT/IS investment evaluation process’’. 957-83. Hitt. and Hilmola. 31 No. 3.E. 173 No. 16 No. pp. Z. pp. (2005). pp. L. Love. C. (1996). 75 Nos 1/2. L. (2006). R. (1981b). Irani.G. (2006). Vol. 55-69. A. pp. Sedera. and Brynjolfsson. (1996). 4. N. 14 No. 1.H. and Chan. Vol.
(2003). Vol. Vol. Doherty. V. 17 No. pp. (2002). and Kim. D. R. and Grover. Law. 423-35. and King. ‘‘A taxonomy of antecedents of information systems success: variable analysis studies’’. ‘‘The usefulness of computerbased information to public managers’’. pp. Vol.J. International Journal of Medical Informatics. 97-122. 4. N.. 37 No. ‘‘Information systems failures: a survey and classification of the empirical literature’’. (1993). 257-309. pp.K. J. pp. 9 No.. Kraemer. 241-55. (2005). L. and Balloun. 2. 4. P. 20 No. A. Journal of Management Information Systems. (2006). 377-85.L. pp. pp. Journal of Management Information Systems. 2.N.. ‘‘Researching the investment of information technology in construction: an examination of evaluation practices’’. Kumar. J. Vol. and Ngai. N. 29 No. 3. 19 No. Vol.J.. (1999). S. Kraemer.J. G. ‘‘Business value of IT: an essay on expanding research directions to keep up with the times’’. 169-246. 21 No. Decision Sciences. 145-56. and Hirschheim. Logistics Information Management. pp. 1.. 4. ‘‘Review: information technology and organizational performance: an integrative model of IT business value’’. Vol. pp. D.D. Vol. Vol. McAfee. 129-48.A. 6. N. ‘‘Measuring the organizational impact of information technology investment: an exploratory study’’. 4. M.Klein. Ghoneim. Mirani. K. L. S. and Stergioulas. D.D. pp.L. 11-32. ‘‘Management of information technology evaluation – the development of a managerial thesis’’. International Journal of Enterprise Information Systems. Marsh. (2008). ‘‘A lag effect of IT investment on firm performance’’. C. L. Lyytinen. Danziger. ‘‘Measuring the costs and benefits of information technology in construction’’. ‘‘A framework for assessing the business value of information technology infrastructures’’. (1987). and Mann. 2. pp. and Kim. R. K. Lee. ‘‘An instrument for assessing the organizational benefits of IS projects’’. 803-38. 23-39. Production and Operations Management. 
‘‘IT business value research: a critical review and research agenda’’. pp. K. Love. ‘‘The stakeholder dimension in information systems evaluation’’. 33-53.. McAulay. Leem. 312-25. Vol. 14 No. 7 No. 17 No. and Flanagan. R. pp. ‘‘Information technology evaluation: classifying indirect costs using the structured case method’’. Automation in Construction.S. 4. 1 No.H. Melville.E. I. Larsen. Kohli. 181-6. ‘‘Investigating evaluation frameworks for health information systems’’. 569-82. Z. J. Industrial Management & Data Systems. Vol. R. Mohd. 1. Vol. V. Irani. 11 No. and Edwards. 28 No.C. Vol. A. Engineering Construction & Architectural Management. 283-322. ‘‘Information system evaluation by system typology’’. and Remenyi.. Vol. R. 35-55. Vol. Jiang. ‘‘The impact of enterprise information technology adoption on operational performance: an empirical investigation’’.J. (2004).. (1998).L.E. Oxford Surveys in Information Technology. R. Journal of Information Technology. 104 Nos 1/2. (2004). E. 115-28. Information Resources Management Journal.. (1997). (2000). MIS Quarterly. and Keval. Vol. pp. 77 No. Vol. pp. A. pp. pp. 1. M. ‘‘An integrated evaluation system based on the continuous improvement model of IS performance’’. 3.E. Journal of Management Information Systems. Z. G. A. 2. (2004). Information technology evaluation 53 . (1993). 4. MIS Quarterly. (2005). Journal of Enterprise Information Management. Journal of Systems and Software. Vol. pp. and Gurbaxani. 43-69. (2008). Lubbe. Dunkle.R. Paul. 12 Nos 1/2. Journal of the Association for Information Systems. C. S. K. J. and Lederer.T.W. 17 No. and Irani. (2002). pp. Vol. Vol. Mahmood. 10 No. Yusof. Love. P. pp. (2004). Papazafeiropoulou. 1.T.
JSIT 12,1

Mukhopadhyay, T. and Cooper, R.B. (1993), ''A microeconomic production assessment of the business value of management information systems: the case of inventory control'', Journal of Management Information Systems, Vol. 10 No. 1, pp. 33-56.
Mukhopadhyay, T., Kekre, S. and Kalathur, S. (1995), ''Business value of information technology: a study of electronic data interchange'', MIS Quarterly, Vol. 19 No. 2, pp. 137-56.
Oz, E. (2005), ''Information technology productivity: in search of a definite observation'', Information & Management, Vol. 42 No. 6, pp. 789-98.
Palvia, P.C., Perkins, J.A. and Zeltmann, S.M. (1992), ''The PRISM system: a key to organizational effectiveness at Federal Express Corporation'', MIS Quarterly, Vol. 16 No. 3, pp. 277-92.
Palvia, S.C., Sharma, R.S. and Conrath, D.W. (2001), ''A socio-technical framework for quality assessment of computer information systems'', Industrial Management & Data Systems, Vol. 101 Nos 5/6, pp. 237-51.
Pitt, L.F., Watson, R.T. and Kavan, C.B. (1995), ''Service quality: a measure of information systems effectiveness'', MIS Quarterly, Vol. 19 No. 2, pp. 173-85.
Premkumar, G. and King, W.R. (1992), ''An empirical assessment of information systems planning and the role of information systems in organizations'', Journal of Management Information Systems, Vol. 9 No. 2, pp. 99-126.
Ragowsky, A., Ahituv, N. and Neumann, S. (2000), ''The benefits of using information systems'', Communications of the ACM, Vol. 43 No. 11es, pp. 303-11.
Rai, A., Lang, S.S. and Welker, R.B. (2002), ''Assessing the validity of IS success models: an empirical test and theoretical analysis'', Information Systems Research, Vol. 13 No. 1, pp. 50-69.
Remenyi, D. and Sherwood-Smith, M. (1999), ''Maximise information systems value by continuous participative evaluation'', Logistics Information Management, Vol. 12 Nos 1/2, pp. 14-31.
Renkema, T.J.W. and Berghout, E.W. (1997), ''Methodologies for information systems investment evaluation at the proposal stage: a comparative review'', Information and Software Technology, Vol. 39 No. 1, pp. 1-13.
Schniederjans, M.J. and Hamaker, J.L. (2003), ''A new strategic information technology investment model'', Management Decision, Vol. 41 Nos 1/2, pp. 8-17.
Scott, J.E. (1995), ''The measurement of information systems effectiveness: evaluating a measuring instrument'', Database for Advances in Information Systems, Vol. 26 No. 1, pp. 43-61.
Seddon, P.B., Graeser, V. and Willcocks, L.P. (2002), ''Measuring organizational IS effectiveness: an overview and update of senior management perspectives'', ACM SIGMIS Database, Vol. 33 No. 2, pp. 11-28.
Seddon, P.B., Staples, S., Patnayakuni, R. and Bowtell, M. (1999), ''Dimensions of information systems success'', Communications of the AIS, Vol. 2 No. 20, pp. 1-39.
Serafeimidis, V. and Smithson, S. (1996), ''The management of change for information systems evaluation practice: experience from a case study'', International Journal of Information Management, Vol. 16 No. 3, pp. 205-17.
Shayo, C., Guthrie, R. and Igbaria, M. (1999), ''Exploring the measurement of end user computing success'', Journal of End User Computing, Vol. 11 No. 1, pp. 5-14.
Skok, W., Kophamel, A. and Richardson, I. (2001), ''Diagnosing information systems success: importance-performance maps in the health club industry'', Information & Management, Vol. 38 No. 7, pp. 409-19.
Sohal, A.S., Moss, S. and Ng, L. (2001), ''Comparing IT success in manufacturing and service industries'', International Journal of Operations & Production Management, Vol. 21 Nos 1/2, pp. 30-45.
Stivers, B.P. and Joyce, T. (2000), ''Building a balanced performance management system'', SAM Advanced Management Journal, Vol. 65 No. 2, pp. 22-9.
Stockdale, R. and Standing, C. (2006), ''An interpretive approach to evaluating information systems: a content, context, process framework'', European Journal of Operational Research, Vol. 173 No. 3, pp. 1090-102.
Stockdale, R., Standing, C. and Love, P.E.D. (2006), ''Propagation of a parsimonious framework for evaluating information systems in construction'', Automation in Construction, Vol. 15 No. 6, pp. 729-36.
Sugumaran, V. and Arogyaswamy, B. (2003/2004), ''Measuring IT performance: contingency variables and value modes'', The Journal of Computer Information Systems, Vol. 44 No. 2, pp. 79-86.
Tangpong, C. (2008), ''IT-performance paradox revisited: resource-based and prisoner's dilemma perspectives'', Journal of Applied Management and Entrepreneurship, Vol. 13 No. 1, pp. 35-49.
Thong, J.Y.L. and Yap, C.S. (1996), ''Information systems effectiveness: a user satisfaction approach'', Information Processing & Management, Vol. 32 No. 5, pp. 601-10.
Webster, J. and Watson, R.T. (2002), ''Analyzing the past to prepare for the future: writing a literature review'', MIS Quarterly, Vol. 26 No. 2, pp. xiii-xxiii.
Weill, P. (1992), ''The relationship between investment in information technology and firm performance: a study of the valve manufacturing sector'', Information Systems Research, Vol. 3 No. 4, pp. 307-33.
Weill, P. and Olson, M.H. (1989), ''Managing investment in information technology: mini case examples and implications'', MIS Quarterly, Vol. 13 No. 1, pp. 3-17.
Willcocks, L. and Lester, S. (1996), ''Beyond the IT productivity paradox'', European Management Journal, Vol. 14 No. 3, pp. 279-90.

Further reading
Dasgupta, S., Sarkis, J. and Talluri, S. (1999), ''Influence of information technology investment on firm productivity: a cross-sectional study'', Logistics Information Management, Vol. 12 Nos 1/2, pp. 120-9.

Corresponding author
Govindan Marthandan can be contacted at: marthandan@mmu.edu.my

To purchase reprints of this article please e-mail: reprints@emeraldinsight.com
Or visit our web site for further details: www.emeraldinsight.com/reprints