
Evaluation of Information Systems
Definition of Evaluation
Dictionary: assessment of value; the act of considering or examining something in order to judge its value, quality, importance, extent, or condition.
Evaluation of Information Systems
• The adoption of Information Technology (IT) and Information Systems (IS) represents a significant financial investment, with alternative perspectives on evaluation coming from both the public and private sectors.
• As a result of increasing IT/IS budgets and their growing significance
within the development of an organizational infrastructure, the
evaluation and performance measurement of new technology remains a
perennial issue for management.
• Evaluating information systems explores the concept of evaluation as an evolutionary and dynamic process that takes into account the ability of enterprise technologies to integrate information systems within and between organizations, particularly when set against a backdrop of organizational learning. It examines the changing portfolio of benefits, costs and risks associated with the adoption and diffusion of technology in today's global marketplace.
Evaluation of Information Systems (contd.)
• Within organizations, information systems evaluation is a key part of IS
investment decisions, for example in deciding whether to go ahead with a
particular project, or to modify or scrap a development. The evaluation
becomes a basis for comparison with other projects, both inside and
outside the organization.
• With increasingly large investments, these become decisions with
significant financial implications. Moreover, such decisions can have
significant implications for both groups and individuals within the
organization. People’s careers or working lives, or a group’s command of
resources or status, may be contingent on the evaluation of a particular
information system.
Evaluation of Information Systems (contd.)
• An evaluation is normally a form of measurement or classification, a type of
inscription that translates a particular entity or situation, such as an IT investment,
into a quantifiable outcome against certain criteria.

 investment analysis (cost-benefit identification), which attempts to evaluate whether an IS investment is likely to be worthwhile financially (a hedged sketch follows this list)
 impact analysis, looking at the effect a new system has had - or is likely to have - on a particular group or organizational function
 performance measurement, focusing on how well a particular system is working, typically using fairly low-level technical or financial metrics
 problem diagnosis, seeking to account for, and solve, a particular IS problem
 risk assessment, examining the risks of projects failing in some way, and
 portfolio optimization, where an organization wishes to maximize its portfolio of projects, rather than judging individual projects
Evaluation of Information Systems (contd.)
• Evaluation occurs twice in the 'traditional' structured systems
analysis and design approach:
 first, in the feasibility phase, in which an attempt is made to
establish likely impact and costs, and,
 second, in the form of a post-implementation evaluation, which is
an attempt to measure what impact the system actually had.
This approach focuses on issues such as whether the project was delivered on time, whether it was within budget and whether it met the specifications, and ignores other issues, such as what the stakeholders think of both the 'process' and the 'product'. Post-implementation evaluation is also criticized on the grounds that it is generally conducted by system developers as a quick 'pack up and get out' project closure activity.
Evaluation of Information Systems (contd.)
• Broader studies, based on theories of information economics, examine the value of IS development to organizations based on factors such as strategic impact, productivity impact and consumer value, while another set of studies, based on behavioral science, addresses the relationship between IS development and individual satisfaction and decision-making behavior, and the consequent impact on the organization.
Evaluating Information System Effectiveness and Efficiency
• Section One: Why study effectiveness?
 Problems have arisen or criticisms have been voiced
in connection with a system;
 Some indicators of the ineffectiveness of the
hardware and software being used may prompt the
review;
 Management may wish to implement a system
initially developed in one division throughout the
organization, but may want to first establish its
effectiveness;
 A post-implementation review to determine whether a new system is meeting its objectives.
Indicators of System Ineffectiveness
 excessive down time and idle time
 slow system response time
 excessive maintenance costs
 inability to interface with new hardware/software
 unreliable system outputs
 data loss
 excessive run costs
 frequent need for program maintenance and modification
 user dissatisfaction with output format, content or timeliness
Approaches to Measurement of System Effectiveness
• Goal-centered view - does the system achieve the goals set out for it?
• Conflicts as to priorities, timing, etc. can lead to objectives being met in the short run by sacrificing fundamental system qualities, leading to a long-run decline in the effectiveness of the system.
• System resource view - desirable qualities of a system are identified and their levels are measured.
• If the qualities exist, then information system objectives, by inference, should be met. By measuring the qualities of the system, the auditor may get a better, longer-term view of a system's effectiveness.
• The main problem: measuring system qualities is much more difficult than measuring goal achievement.
Types of Evaluations for System Effectiveness
• Relative evaluation - the auditor compares the state of goal accomplishment after the system has been implemented with the state of goal accomplishment before the system was implemented:
• Improved task accomplishment, and
• Improved quality of working life.
• Absolute evaluation - the auditor assesses the size of the goal accomplishment after the system has been implemented:
• Operational effectiveness,
• Technical effectiveness, and
• Economic effectiveness.
Task Accomplishment
• Providing specific measures of past accomplishment that the auditor can use to evaluate an IS is difficult.
• Performance measures for task accomplishment differ across applications and sometimes across organizations.
• For a manufacturing control system they might be:
• number of units output,
• number of defective units reworked, units scrapped,
• amount of down time/idle time.
• It is important to trace task accomplishment over time: a system may appear to have improved for a short time after implementation, but fall into disarray thereafter (a sketch of such a trend check follows).
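To make the trend point concrete, here is a minimal sketch that traces a single task-accomplishment metric across periods before and after implementation; the metric (units output per month) and all figures are hypothetical.

```python
# Hypothetical sketch: tracing one task-accomplishment metric
# (units output per month) before and after system implementation.
# All figures are invented for illustration.

monthly_output = {
    "pre":  [980, 1010, 995, 1005],         # months before go-live
    "post": [1150, 1180, 1120, 1040, 990],  # months after go-live
}

# Compare each post-implementation month with the pre-implementation
# average, so a short-lived improvement followed by decline is visible.
pre_avg = sum(monthly_output["pre"]) / len(monthly_output["pre"])
for month, units in enumerate(monthly_output["post"], start=1):
    change = (units - pre_avg) / pre_avg * 100
    print(f"month {month} after go-live: {units} units "
          f"({change:+.1f}% vs pre average)")
```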
Quality of Working Life
• A high quality of working life for users of a system is a major objective in the design process. Unfortunately, there is less agreement on the definition and measurement of the concept of quality of working life.
• Different groups have different vested interests - some productivity, some social.
• Major advantages - the measures are relatively objective, verifiable, and difficult to manipulate, and the data required is relatively easy to obtain.
• Major disadvantages - it is difficult to link the measures to IS quality and difficult to pinpoint what corrective action is needed.
Operational Effectiveness Objectives
• The auditor examines how well a system meets its goals from the viewpoint of a user who interacts with the system on a regular basis. Four main measures:
• Frequency of use,
• Nature of use,
• Ease of use, and
• User satisfaction.
Frequency and Nature of Use
• Frequency - employed widely, but problematic:
• sometimes a high quality system leads to a low frequency of use because the system permits more work to be accomplished in a shorter period of time;
• sometimes a poor quality system leads to a low frequency of use since users dislike the system.
• Nature - users can use systems in many ways:
• lowest level: treat the system as a black box providing solutions to the problems presented to it;
• highest level: use the system to redefine how tasks and jobs are performed and viewed.
Ease of Use and User Satisfaction
• Ease of use - there is a positive correlation between users' feelings about systems and the degree to which the systems are easy to use. In evaluating ease of use, it is important to identify the primary and secondary users of a system.
• Terminal location, flexibility of reporting, ease of error correction.
• User satisfaction - has become an important measure of operational effectiveness because of the difficulties and problems associated with measures of frequency of use, nature of use, and ease of use.
• Problem finding, problem solving, input, processing, report form.
Technical Effectiveness Objectives
• Has the appropriate hardware and software technology been used to support a system, or would a change in the supporting hardware or software technology enable the system to meet its goals better?
• Hardware performance can be measured using hardware monitors or more gross measures such as system response time and down time.
• Software effectiveness can be measured by examining the history of program maintenance, modification and run-time resource consumption. The history of program repair maintenance indicates the quality of the logic existing in a program; i.e., extensive error correction implies inappropriate design, coding or testing, failure to use structured approaches, etc.
• Major problem: hardware and software are not independent.
Economic Effectiveness Objectives
• Requires the identification of costs and benefits and the proper evaluation of those costs and benefits - a difficult task, since costs and benefits depend on the nature of the IS.
• For example, some of the benefits expected and derived from an IS designed to support a social service environment would differ significantly from those of a system designed to support manufacturing activities. Some of the most significant costs and benefits may be intangible and difficult to identify, and next to impossible to value.
SECTION TWO - Evaluating System Efficiency
• Why would an auditor get involved in a study of system efficiency?
• To evaluate an existing operational system to determine whether its performance can be improved;
• To evaluate alternate systems that the installation is considering purchasing or leasing. For example, management may be considering two systems with different database management approaches.
• To determine whether a system is efficient, the auditor will need to identify:
• an appropriate performance index to assess system efficiency, and
• an appropriate workload model to measure the system's performance in the context of that workload.
Performance Indices
• Performance indices measure system efficiency: quantitatively, how well the system achieves an efficiency criterion. They have several functions:
• allow users to decide whether a system will meet their needs,
• permit comparison of alternate systems, and
• show whether changes to the hardware/software configuration of a system have produced the desired effect.
• Indices are best expressed using ranges or probability distributions - an average may be deceiving (look at response time variations; a sketch follows this list).
• Indices should also be expressed in terms of workload - e.g., the response time of an interactive system will vary depending on the number and the nature of the jobs in the system.
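To make the "averages may deceive" point concrete, here is a minimal sketch comparing two hypothetical systems whose mean response times are identical but whose distributions differ sharply; all timings and the nearest-rank percentile helper are invented for illustration.

```python
# Minimal sketch of why a mean can deceive as a performance index:
# two hypothetical systems share the same mean response time, but
# one has a severe tail. All timings are invented.
import math

def percentile(values, p):
    """Nearest-rank percentile (p in 1..100) of a list of values."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

system_a = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0]  # steady responses
system_b = [0.2, 0.3, 0.2, 0.3, 0.2, 0.3, 0.2, 6.3]  # same mean, bad tail

for name, times in (("A", system_a), ("B", system_b)):
    mean = sum(times) / len(times)
    print(f"system {name}: mean {mean:.2f}s, "
          f"95th percentile {percentile(times, 95):.2f}s")
```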
Indices - Timeliness
• Timeliness indices measure how quickly a system is able to provide users with the output they require.
• For a batch system, the index typically is turnaround time - the length of time between submission of a job and receipt of the complete output.
• For interactive systems, it is response time - the length of time between submission of an input transaction to the system and receipt of the first character of output.
• Timeliness must be defined in terms of a unit of work and the priority categorization given to the unit of work.
• In a batch system the unit of work usually is a job.
• In an interactive system it may be a job consisting of multiple transactions, or a single transaction.
Indices - Throughput & Utilization
• Throughput indices measure how much work is done by the system over a period of time (a sketch follows this list).
• The throughput rate of a system is the amount of work done per unit of time.
• The system capability is the maximum achievable throughput rate.
• Throughput indices must be defined in terms of some unit of work: a job, a task, or an instruction.
• The more responsive a system, the greater its throughput.
• Utilization indices measure the proportion of time a system resource is busy.
• For example, the CPU utilization index is calculated by dividing the amount of time the CPU is busy by the total amount of time the system is running.
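A minimal sketch of both indices, computed from invented measurements over a hypothetical one-hour monitoring window; the unit of work (a job) and all counts are assumptions.

```python
# Minimal sketch of throughput and utilization indices, using
# invented measurements from a hypothetical monitoring interval.

jobs_completed = 1_240     # unit of work: a job (assumed)
interval_seconds = 3_600   # one-hour observation window
cpu_busy_seconds = 2_700   # time the CPU was busy in that window

throughput_rate = jobs_completed / interval_seconds    # jobs per second
cpu_utilization = cpu_busy_seconds / interval_seconds  # proportion busy

print(f"throughput: {throughput_rate:.3f} jobs/s "
      f"({throughput_rate * 3600:.0f} jobs/hour)")
print(f"CPU utilization: {cpu_utilization:.0%}")
```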
Workload
• A system's workload is the set of resource demands imposed upon the system resources by the set of jobs during a given time period.
• Using the real workload of the system for evaluation may be too costly and too disruptive.
• To measure efficiency for a representative workload, the time period for evaluation may be too long.
• Also, the real workload cannot be used if the system to be evaluated is not operational.
• Hence the auditor needs a workload model representative of the real workload.
Workload Models
• Natural workload models, or benchmarks, are constructed by taking some subset of the real workload (a content-subset sketch follows this list).
• In a time subset, the content of the workload model is the same as the real workload, but the time interval for performance indices is smaller than the interval for the real workload.
• In a content subset, sample jobs from the real workload are selected in some way.
• Artificial workload models are not constructed from jobs in the real workload; they are useful when the system is unable to process the natural workload.
• Natural models - more representative and less costly to construct.
• Artificial models - more flexible and more compact.
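As a minimal sketch of the content-subset idea, the code below draws a random sample of jobs from a stand-in "real workload"; the job records, sample size and selection rule are all invented for illustration.

```python
# Hypothetical sketch of a natural workload model built as a content
# subset: sample jobs from the real workload so the model stays
# representative. Job records here are invented stand-ins.
import random

random.seed(42)  # reproducible sample for repeated benchmark runs

# Invented job records standing in for the real workload.
real_workload = [
    {"job": f"job-{i}",
     "cpu_s": random.uniform(1, 60),
     "io_ops": random.randint(10, 5000)}
    for i in range(1000)
]

# Content subset: jobs "selected in some way" - here, a 5% random sample.
content_subset = random.sample(real_workload, k=50)

avg_cpu = sum(j["cpu_s"] for j in content_subset) / len(content_subset)
print(f"model of {len(content_subset)} jobs, "
      f"average CPU demand {avg_cpu:.1f}s")
```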
SECTION 3 - Comparison of 3 Audit Approaches - Objectives
•F/S audit - express an opinion as to whether financial
statements are in accordance with GAAP
•Effectiveness audit - express an opinion on whether a
system achieves the goals set for the system. These
goals may be quite broad or specific.
•Audits of system efficiency - whether maximum output is
achieved at minimum cost or with minimum input
assuming a given level of quality.
Comparison of 3 Approaches - Planning
• F/S audit - part of planning is identifying controls upon which the auditor could rely and so reduce other audit verification procedures, or identifying controls upon which the auditor is forced to rely.
• Effectiveness audit - identify the goals and the measures for determining whether the goals were attained during a specific period. If the goals are explicit, the measures are more straightforward; however, when the goals are broad and multi-dimensional, the auditor may need to develop relevant measures and indicators of achievement.
Comparison of 3 Approaches - Planning (contd.)
• Audits of system efficiency - often comparable to
a scientific experiment. A scheme for obtaining
measurements must be developed explicitly for
the performance index defined. For example, if
average turnaround time is used as a measure
of efficiency, then the experimental task must
control for various job sizes, time of day, etc.
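To illustrate the experimental-control point, here is a minimal sketch that reports average turnaround time per controlled cell (job size and time of day) rather than one overall mean; the observations are invented.

```python
# Hypothetical sketch of controlling for job size and time of day when
# measuring average turnaround time. All observations are invented.
from collections import defaultdict

# (job_size, time_of_day, turnaround_seconds) observations
observations = [
    ("small", "peak", 45), ("small", "off-peak", 30),
    ("small", "peak", 50), ("small", "off-peak", 28),
    ("large", "peak", 310), ("large", "off-peak", 190),
    ("large", "peak", 290), ("large", "off-peak", 205),
]

groups = defaultdict(list)
for size, period, seconds in observations:
    groups[(size, period)].append(seconds)

# Report turnaround per controlled cell rather than one overall mean.
for (size, period), times in sorted(groups.items()):
    avg = sum(times) / len(times)
    print(f"{size:>5} jobs, {period:>8}: avg turnaround {avg:.0f}s")
```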
Comparison of 3 Approaches - Execution
• F/S audit - controls analysis and CAATs.
• Effectiveness - once the system goals have been identified, measures of goal achievement have been selected, and the population to be studied has been identified, it is necessary to actually obtain measures of goal achievement and analyze the results.
• Efficiency - during the execution phase the benchmark or workload model test is actually run and the results are subjected to analysis. Care must be taken to control for interference by factors other than those built into the model, and measurements must be taken carefully.
Comparison of 3 Approaches - Reporting
• F/S audit - a letter regarding internal control deficiencies.
• Effectiveness - the analysis will likely highlight areas of successful attainment of objectives as well as failures. Explanations of the causes of significant successes and failures should be sought out and included in the report.
• Efficiency - reports of studies of system efficiency must typically contain specific recommendations identifying ways in which the identified inefficiencies can be eliminated.
