
EVALUATING GOVERNMENT PROGRAMS - A HANDBOOK

(Originally published in 1986 by the Australian Government Printing
Service for the Australian Federal Department of Finance)

Keith Linard
(Formerly Chief Finance Officer, Australian Federal Department of Finance)
Director, Ankie Consulting Pty Ltd

Keithlinard#@#yahoo.co.uk (Remove hashes to email)


FOREWORD
The resources available to the community are limited, and are generally considered to be
insufficient to provide all the goods and services which individuals or groups might want.
As a community, therefore, it is important to try to allocate these scarce resources so as to
achieve the greatest overall satisfaction. Colloquially this might be expressed as getting
the "best value for money".

In many areas of the economy the market mechanism promotes and facilitates efficient
resource usage. Most government activities, however, are not subject to the market
mechanism and other approaches are necessary to indicate to the public, Cabinet and
Parliament the value for money of these activities. Evaluation performs this role.

Just as business managers consider new investment proposals and review existing
priorities in relation to their impact on business goals, new expenditure proposals and
current programs of Commonwealth agencies should be evaluated in the light of their
impact on Government goals.

Evaluation is a key element of the Government's budgetary and financial reforms as
expressed in its Policy Paper on Budget Reform (April 1984). The Government's stated
aims are:
. to develop better means of identifying and setting budgetary priorities, in order to
ensure that the best overall result is achieved in terms of meeting the
Government's objectives with the resources which can be made available;
. to focus attention more clearly on the goals and objectives of particular programs,
in relation to the resources they use;
. to develop and apply to the management of Commonwealth programs specific
techniques aimed at improved performance and more efficient resource use; and

. to set up machinery to ensure that the effectiveness and efficiency of programs are
reviewed regularly, and that the results of such reviews are taken into account in
the ongoing evaluation of budgetary priorities.
The Financial Management Improvement Program and Program Budgeting are currently
the key vehicles for focussing attention on the importance of evaluation.
This Handbook was prepared primarily by the FMIP Unit in the Department of Finance.
The Handbook draws on the experience of other governmental agencies in addressing the
evaluation of programs, including Commonwealth Government specialist evaluation
units such as the Bureau of Transport Economics and the former Bureau of Labour
Market Research, Victorian and NSW State Government agencies, the Canadian
Treasury Board, the US General Accounting Office and the OECD. Professional
evaluators and others, both within and outside the Commonwealth Government, have
provided valuable comments on early drafts.

TABLE OF CONTENTS
FOREWORD ................................................................................................................................................. 2
TABLE OF CONTENTS ............................................................................................................................... 3
INTRODUCTION .......................................................................................................................................... 6
CHAPTER 1 ................................................................................................................................................... 7
PROGRAM EVALUATION ..................................................................................................................... 7
Defining Program Evaluation ................................................................................................................. 7
The Program Evaluation Process............................................................................................................ 7
What Programs should be evaluated and when? .................................................................................... 8
What should be the size of the Program Element for a particular evaluation? ....................................... 8
CHAPTER 2 ................................................................................................................................................... 9
DESIGNING THE EVALUATION - THE PRE-EVALUATION ASSESSMENT ................................. 9
Issues to be Addressed in Pre-Evaluation Assessment ............................................................................. 12
Step I - Define the Purpose of the Evaluation ...................................................................................... 12
Step 2 - Define Nature, Scope and Objectives of Program .................................................................. 13
Step 3 - Analyse the Program Logic..................................................................................................... 15
Step 4 - Specify Alternative Ways of Meeting Program Objectives .................................................... 17
Step 5 - Identify the Key Evaluation Issues ......................................................................................... 19
Step 6 - Identify the Evaluation Constraints ......................................................................................... 21
Step 7 - Assess Appropriate Evaluation Designs ................................................................................. 22
Step 8 - Develop Strategy for Evaluation Study................................................................................... 29
CHAPTER 3 ................................................................................................................................................. 30
UNDERTAKING THE EVALUATION STUDY ................................................................................... 30
Detailed Work Plan .............................................................................................................................. 30
The next step is to develop a schedule covering: ................................................................................. 30
Principles for the Conduct of an Evaluation Study .................................................................................. 31
CHAPTER 4 ................................................................................................................................................. 34
THE EVALUATION REPORT ............................................................................................................... 34
Table of Contents ................................................................................................................................. 34
Executive Summary.............................................................................................................................. 35
Introduction .......................................................................................................................................... 35
The Substance of the Report................................................................................................................. 35
Findings and Conclusions .................................................................................................................... 35
Recommendations ................................................................................................................................ 36
Resource Issues .................................................................................................................................... 36
Appendices ........................................................................................................................................... 36
CHAPTER 5 ................................................................................................................................................. 37
REVIEWING THE ADEQUACY OF THE EVALUATION ................................................................. 37
APPENDICES .............................................................................................................................................. 39
APPENDIX A .............................................................................................................................................. 40
TYPES OF EVALUATION ..................................................................................................................... 40
WHO EVALUATES AND WHEN? ....................................................................................................... 42
CHECKLISTS .............................................................................................................................................. 44
PRE-EVALUATION ASSESSMENT AND EVALUATION STUDY .................................................. 44
B.1: STEPS IN THE PRE-EVALUATION ASSESSMENT .............................................................. 44
B.2: PURPOSE OF THE EVALUATION .......................................................................................... 46
B.3: NATURE, SCOPE AND OBJECTIVES OF THE PROGRAM ................................................. 47
B.4: ANALYSE THE PROGRAM LOGIC ........................................................................................ 48
B.5: IDENTIFY ALTERNATIVES .................................................................................................... 49
B.6: IDENTIFY KEY EVALUATION ISSUES ................................................................................ 50
B.7: IDENTIFY EVALUATION CONSTRAINTS ........................................................................... 51
B.8: ASSESS APPROPRIATE EVALUATION DESIGNS .............................................................. 52
B.9: DEVELOP STRATEGY FOR EVALUATION STUDY ........................................................... 53
B.10: STEPS IN THE EVALUATION STUDY* .............................................................................. 54
APPENDIX C: SUGGESTED OUTLINE FOR PRE-EVALUATION ASSESSMENT REPORTS ......... 55
1. An Executive Summary which includes: .................................................................................... 55
2. An Introduction which indicates: ................................................................................................ 55
3. A Program (Element) Description which describes: ................................................................... 55
4. A Summary of the Analyses Conducted which includes: ........................................................... 55
5. Possible Evaluation Designs indicating: ..................................................................................... 55
6. Strategy for the Evaluation Study including: .............................................................................. 56
APPENDIX D: SUGGESTED OUTLINE FOR EVALUATION REPORTS............................................ 57
1. Table of Contents ........................................................................................................................ 57
2. Executive Summary .................................................................................................................... 57
3. Introduction ................................................................................................................................. 57
4. The Substance of the Report ....................................................................................................... 57
5. Findings and Conclusions ........................................................................................................... 57
6. Recommendations ....................................................................................................................... 58
7. Resource Issues ........................................................................................................................... 58
8. Appendices.................................................................................................................................. 58
BIBLIOGRAPHY ........................................................................................................................................ 59

TABLE OF FIGURES

Figure 1: The Program Evaluation Process .................................. 4
Figure 2: Steps in the Pre-Evaluation Assessment .......................... 8
Figure 3: Program (Element) Description ................................... 11
Figure 4: Logic Model for a Typical Government Program .................... 13
Figure 5: Illustration of Analysis of Program Logic ....................... 14
Figure 6: Phase Diagram: Logic, Assumptions, Evaluation Questions and Performance Indicators ... 16
Figure 7: Selecting Alternative Solutions for Analysis .................... 18
Figure 8: Key Evaluation Issues ........................................... 19
Figure 9: Characteristics of Evaluation Designs ........................... 22
Figure 10: Conditions under which Quasi-Experimental and Experimental Designs are Most Likely to be Appropriate ... 26
Figure 11: Illustrations of Evaluation Designs ............................ 27
Figure A1: Types of Evaluation ............................................ 50
Figure A2: Program and Evaluation Cycle ................................... 51
Figure A3: Types of Evaluation Relevant at Different Stages of Program Cycle ... 52
Figure A4: Types of Evaluation and their Characteristics .................. 53
INTRODUCTION
A principal purpose of evaluation is to assist decision making on the allocation or
application of resources. The primary purpose of this Handbook is to provide those
involved in evaluation with a framework for the planning, conduct and reporting of
program evaluations. In developing this framework particular attention is paid to
questions concerning the effectiveness, including relevance and priority, of major
program elements.

The Handbook does not attempt to prescribe specific evaluation techniques; rather it
suggests principles which might usefully guide the conduct of evaluations. The FMIP
Evaluation Training Module and associated training material address the subject of tools
and techniques.
The Handbook focuses on internal reviews, the clients of which are the heads of the
relevant agencies. This reflects the fact that it is they who primarily are responsible for
the efficient and effective use of their agency's resources. The principles discussed in
the Handbook, however, apply equally to external reviews which play an important role
in providing the Government and Parliament with an assessment of the extent to which
policy goals and objectives are being met.

Readers are referred to the Department of Prime Minister and Cabinet's "Review of
Government Policies and Programs - Guidelines" for a checklist of matters, not covered
in this Handbook, to be considered in the preparation of submissions to Ministers for
external reviews of policies or programs.
The general evaluation principles which are discussed are relevant to virtually any
evaluation exercise, ranging from day-to-day monitoring of programs to in-depth
evaluations. As such, the Handbook is directed at both professional evaluators and
program managers. It is also designed to assist managers who have the responsibility for
"quality control" of evaluations.

"Evaluation", unfortunately, is a term which has come to mean different things to


different people. Each discipline has developed its own "evaluation jargon" with
resulting communications problems. Appendix A discusses and defines the different
evaluation terms as they are used under FMIP and Program Budgeting, and how the
various types of evaluation fit into the different stages of the program cycle.

CHAPTER 1
PROGRAM EVALUATION

Defining Program Evaluation

Program evaluation is a systematic assessment of all or part of the program activities to
assist managers and other decision makers to:
. assess the continued relevance and priority of the program objectives in the light
of current circumstances, including government policy changes;

. test whether the program outcomes achieve the stated objectives;

. ascertain whether there are better ways of achieving these objectives;


. decide whether the resources for the program should continue at current levels, be
increased, reduced or discontinued.

Program evaluation in this handbook encompasses any evaluation of a program element,
and may cover one or more of the above issues.

The Program Evaluation Process

Systematic program evaluation in an agency involves five processes or steps (see also
Figure 1):

1. Management establishes a framework for evaluation, which covers procedures for


initiating, undertaking, oversighting and acting upon evaluations; resources to be
applied to evaluations; and a schedule for evaluating all agency program elements
on a regular cycle.
2. Pre-evaluation assessment provides a rational basis for determining the nature of
a particular evaluation study and the resources which might appropriately be
allocated to the task. Chapter 2 discusses the steps which are suggested for the
pre-evaluation assessment.

3. Management considers the recommendations arising from the pre-evaluation


assessment, approves an evaluation strategy (including terms of reference) and
allocates resources to a full scale evaluation study.

4. Evaluation study in which data are collected and analysed, conclusions are drawn
and recommendations are made. Chapters 3 and 4 discuss the principles for
undertaking and reporting evaluation studies.
5. Management consideration of and action on evaluation study conclusions and
recommendations.
What Programs should be evaluated and when?

The continued effectiveness, including relevance and priority, of every program element
in a portfolio should be evaluated at least once every three to five years. The efficiency
with which each is being implemented or administered should be reviewed more
frequently. This suggests a systematic prioritising of program elements into a rolling
schedule of evaluations.

The approach adopted to evaluation will vary according to the nature of the program.
Thus service delivery program elements require evaluation with respect to their
objectives; more commercially oriented activities may be amenable to traditional
economic and financial benefit-cost analysis; while administrative support activities in
an agency may more appropriately be examined through traditional internal audit
procedures in an efficiency context.

All managers at all levels should monitor their particular program elements against key
performance indicators on an ongoing basis.

What should be the size of the Program Element for a particular evaluation?

Whether a given program evaluation examines an entire program, a sub-program,
component or lower element in the program hierarchy will depend on what is being
sought from the evaluation.

Evaluating an entire program in the one exercise runs the risk of being too generalised;
objectives statements at this level of the program hierarchy are often too broad to permit
meaningful measures of performance.

Evaluating at a low level of the hierarchy runs the risk of being too narrowly focussed,
either ignoring the implications of other program elements which are directed to common
objectives or ignoring other approaches to achieving the same objectives.

In general, however, program evaluation would be focussed at the component level or
lower.


CHAPTER 2
DESIGNING THE EVALUATION - THE PRE-EVALUATION
ASSESSMENT
"Would you tell me, please, which way

I ought to go from here?"


"That depends a good deal on where

you want to get to," said the Cat.

"I don't much care where ---" said Alice.

"Then it doesn't matter which way you go,"


said the Cat.

'Alice's Adventures in Wonderland'

(Lewis Carroll)
The purpose of a pre-evaluation assessment is to help ensure that evaluation resources are
used efficiently to answer the right questions in a comprehensible and credible manner.
The assessment should enable those responsible for commissioning the evaluation study
to ensure that an appropriate focus and approach is being adopted. Pre-evaluation
assessment is not a substitute for evaluation, although it may provide rapid interim
feedback to the decision makers.
Implementation of Program Budgeting and the Financial Management Improvement
Program in an agency will require a more systematic approach to evaluation. Much of
the information for the pre-evaluation assessment and the subsequent evaluation study
will also become more readily available through these developments.

The pre-evaluation assessment explores and documents:


. the program objectives, expectations, origins, scope and assumptions of policy
makers and managers;
. the extent to which measurable performance indicators have been developed in
relation to the program objectives;
. data sources, data availability and reliability;

. the needs of the decision makers and the feasibility of meeting those needs;
. options and cost estimates for the evaluation; and

. the likely pay-off from the evaluation resources expended.


FIGURE 2: STEPS IN THE PRE-EVALUATION ASSESSMENT

1. Define purpose of the evaluation:
. background to the evaluation
. audience for the report

2. Define the nature, scope and objectives of the program:
. nature of problem addressed by program
. program authority or mandate
. program objectives
. actual or planned resource use
. available performance indicator data

3. Analyse program logic:
. logical links between inputs, outputs, outcomes and objectives
. key variables in program logic

4. Specify alternative ways of meeting program objectives

5. Identify key evaluation issues:
. program rationale
. impacts and effects
. objectives achievement
. alternatives

6. Identify evaluation constraints:
. time, cost, expertise and credibility

7. Assess appropriate evaluation designs:
. management issues
. alternative evaluation methods
. data collection issues
. data analysis issues
. aggregating benefits and costs

8. Develop strategy for evaluation study:
. study brief and terms of reference
. preliminary work plan
. consider input from other agencies
. consider composition of steering committee and evaluation team
. report likely pay-off
. identify resource requirements


Issues to be Addressed in Pre-Evaluation Assessment
Figure 2 lists the 8 steps which should be addressed in the pre-evaluation assessment.
The balance of this chapter elaborates on these steps, while Appendix B provides
summary checklists for each step. Depending on the complexity of the program, and on
whether there have been previous studies, some of these steps may in practice be
combined, abbreviated, or dispensed with altogether.

Appendix C is a suggested outline for the Pre-Evaluation Assessment Report.

Step I - Define the Purpose of the Evaluation

"In practice, program evaluation rarely leads to more effective programs. When
evaluations are completed, government policy makers and managers usually find the
evaluations irrelevant to their information needs."
(Wholey, 1979)
The primary use for a program evaluation will generally be to assist decision making on
priorities between competing needs in portfolio resource allocation, and on improving the
achievement of objectives in designated priority areas.
If an evaluation is to assist decision making, it must be geared to the needs of the
decision makers and the pressures and constraints of the decision making process.

Failure to specify correctly the purpose of the evaluation may result in the evaluation
answering questions which the decision makers are not interested in asking or having
answered.
The fundamental step, therefore, in each and every evaluation is identifying the purpose
of the evaluation. This must go beyond a simple listing of "objectives", to an
understanding as to why the evaluation was initiated, who initiated it and how the
results are likely to be used. The degree to which this is formally documented depends on
the magnitude and complexity of the program under study.

The purpose and intended use for the program evaluation have a direct bearing on the
nature and quantity of information to be collected, the depth of analysis and the
precision required in presenting results.

The primary audience for program evaluations will generally be the agency executive and
the relevant portfolio Minister. The concerns of other parties, however, will often be
important for the credibility and usefulness of the evaluation. These might include, for
example, the central agencies, Cabinet policy committees, agency managers responsible
for program delivery, other government or non-government agencies involved in program
delivery and program clients.
The work undertaken in this step may range from unstructured discussions with relevant
senior executives, which help clarify the issues in the mind of the evaluator, to formal
interviews with the key decision makers, analysis of documents and so forth.

Step 2 - Define Nature, Scope and Objectives of Program

"Analysts will neglect part of their job if they

indiscriminantly accept the characterization of


a problem as it is first and roughly presented.

But problems and issues should not be redefined

or reformulated merely to suit the analyst or

to fit his analytic tools ..."


(Hatry et al, 1976)

It is essential that both the evaluators and the intended users of the program evaluation
(those who are directly involved in the decision making) share a common
understanding of the nature and scope of the issues at stake. A pre-requisite to this is an
understanding of the program.

A starting point is the preparation of a description of the program element, outlining what
it is supposed to do, what it does, and what resources it consumes. This should be
consistent with material in the relevant "Portfolio Program Statement". Figure 3
indicates the program information which should be included in a pre-evaluation
assessment report.
The source material for authorisation will generally be legislation, Cabinet Decision,
Ministerial Statement, or administrative guidelines. The pre-evaluation assessment
should check that the program activities are consistent with the mandate and that this
mandate is still relevant. Frequently the origins of a program lie in decisions of a
previous Government, sometimes decades in the past. It also happens that programs, in
time, develop in ways not envisaged in the initial authorisation; or that the social or
political environment has changed fundamentally from that in which the program was
conceived.
FIGURE 3: PROGRAM (ELEMENT) DESCRIPTION

1. Authorisation: An outline of the Cabinet, ministerial or legislative basis of the
program, including details of authorised scope and any specific limits on operations.

2. Objectives: These will be output and outcome related, describing clearly what the
program is expected to accomplish, and stated, to the extent possible, in specific and
measurable terms.

3. Description: A short narrative explaining the needs which gave rise to the
program, the activities undertaken in respect of these needs and the outputs or outcomes
expected.

4. Relationship to Other Program Elements: The place of the program element in
the program structure and its relationship to other programs which serve the same or
similar objectives should be discussed briefly.

5. Resource Usage: Historical, current year and projected data should be provided
for all associated administrative, staffing and other program costs (including imputed
costs of government assets or services which are provided free or with a subsidy).

6. Performance Indicators: Details of performance indicators or target standards of
performance which are used by the program managers or are approved or set by
corporate management.
Perhaps the most important information required in the program element description will
come from the program objectives. The source material again will include legislation,
Cabinet Decisions, Ministerial Statements, second reading speeches and program
documentation. A degree of caution is necessary, however, in interpreting such material.
Even where objectives have been explicitly stated, their relevance and priority change
over time, as governments change, as the economy and society changes and as the
program starts to affect society. But often the objectives will not be explicitly stated in
terms which permit measurement, or they will omit important sub-objectives, in which
case the evaluation team will have to develop an agreed objectives statement.

Definition of the objectives will often be an iterative process during the various steps in
the pre-evaluation assessment, and particularly through step 3, the logic analysis. It is
important that both program managers and senior policy advisors agree with the
statement of objectives which is to be used in the evaluation. It may even be appropriate
to seek the Minister's concurrence.

Associated with the significance and continued relevance of stated objectives is the origin
of the program, and especially its identification with particular events, agencies or
perspectives.

The program description, in addition to filling in this background detail, should clearly
identify the client group, the results expected from the program activities, and the
consequences of changes to the program. In developing an understanding of the program
it is important, in addition to consulting sources such as those mentioned above, to
interview senior departmental executives, and particularly the program element

managers. It will often also be desirable to interview line managers, local staff and
clients in order to get an adequate understanding of the program operation in practice.
The successful evaluation is one which proceeds with an understanding of the social and
political environment, which takes account of the constraints which affect the decision
makers and which therefore is better geared to their needs.

The source of the program element's relationship with other program elements will be the
portfolio program statement, and departmental documents such as the corporate plan.
Details of resource usage will desirably be available from the agency's management
information system; however, this will often not include details of common service costs
or costs of services provided free of charge by other agencies. Details of some of the
performance indicator data will be given in the portfolio program statement; however, the
program and line managers will be the main source of information in this regard.

Step 3 - Analyse the Program Logic

The purpose in analysing program logic is to understand and describe the processes by
which the program is supposed to achieve its objectives. The analysis is a study of the
presumed linkages between program inputs, intermediate processes, the outputs and the
ultimate objectives.
During the pre-evaluation assessment, the emphasis is on determining plausible causal
relationships rather than testing for the validity of the assumed linkages between inputs
and objectives. The analysis at this stage is based on common sense and professional
judgement.
In this step, we assess issues such as:
. what key assumptions underlie the program and how would its success be
affected if they are not valid;
. what tests or measures might be appropriate to ascertain the validity of these
assumptions;
. are there unintended secondary impacts;

. what aspects of program operation are likely to be affected significantly by other


parallel programs;

. on what aspects of the program should evaluation resources be focussed.

Figure 4 illustrates the form of a logic model for a typical government program.
Thus, for example, the essential logic of the training component of a hypothetical labour
market training program might be represented as shown in Figure 5 below.

The objectives of such a program might relate to assisting long term unemployed to get
jobs; to redressing unequal employment opportunities between social groups; to
facilitating structural change and so forth. Assuming that the objectives of the program
have been defined and at least the broad concept of the program operation articulated, the
analysis of program logic proceeds by:
sub-dividing the operation of the program into a manageable number of major
activities or phases (between 5 and 10 segments is usually appropriate);
identifying the inputs and intended outputs or outcomes (i.e. intermediate
objectives) of each of these major phases;

identifying significant secondary outputs/outcomes (whether desirable or


undesirable);
specifying the perceived logical relationships, and the implicit assumptions
underlying them, between the inputs, the outputs or outcomes and the
intermediate or final objectives;

identifying the evaluation questions which are of interest with respect to each
phase;

specifying performance indicators which can be used to monitor or measure the


efficiency or effectiveness of the respective program phases; and

confirming with the program managers and line managers that the model is a
realistic representation of what happens.

Figure 6 illustrates how these operations might proceed for the hypothetical labour market
training program. Five program phases are identified. The ultimate objectives, presumed
here to relate to reducing long term unemployment, enhancing equal employment
opportunity and meeting industry needs, provide the basis for intermediate objectives (eg
ensuring sufficient females enter the training program and the client group are the long-
term unemployed) and for the related intermediate performance indicators.
Sub-dividing the program in this way facilitates analysis by enabling the evaluator to
focus on any specific objective, to trace back through each phase how the program
attempts to meet that objective and to plan specific tests of the effectiveness of each
activity in relation to intermediate and final objectives.
Did the program fail to reach its target for the percentage of job placements which went
to females? Was the problem with the advertising, the selection process, the training
process or the marketing? Or was there a flaw in the essential logic which presumed, for
example, that unemployment among women related to their lack of appropriate skills
when, in fact, it lay in the biases of prospective employers?
Were there unanticipated undesirable secondary effects; unrealisable expectations among
entrants or employers; opposition from trade unions to the training program;
disillusionment among unsuccessful graduates? At which phase did such problems arise;
what performance indicators could be established to monitor them; what options are open
to management to minimize them?

Logic models such as this assist in reducing complex programs to a form which can be
assimilated and provide a frame of reference for discussion about, and analysis of, the
program. Inexpensive and "user-friendly" micro-computer project management packages
are available which can greatly facilitate this analysis of program logic.
The logic analysis approach is developed further and its use explained through several
case studies in the FMIP Handbook "Comprehensive Project Management".
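The structure of such a logic model can also be recorded in something as simple as a spreadsheet or a short script. The following Python sketch is illustrative only: the phase names, assumptions and indicators are hypothetical, loosely based on the labour market training example above, and are not drawn from any actual program.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    """One phase of a program logic model (illustrative structure only)."""
    name: str
    inputs: list
    intended_outputs: list
    assumptions: list            # implicit logic linking inputs to outputs
    evaluation_questions: list
    performance_indicators: list

# Hypothetical first two phases of the labour market training example
logic_model = [
    Phase(
        name="Advertising and recruitment",
        inputs=["advertising budget", "employment service referrals"],
        intended_outputs=["applications from the long-term unemployed, including women"],
        assumptions=["the target group sees and responds to the advertising"],
        evaluation_questions=["Did applications come from the intended client group?"],
        performance_indicators=["% of applicants long-term unemployed", "% of applicants female"],
    ),
    Phase(
        name="Selection of trainees",
        inputs=["applications", "selection criteria"],
        intended_outputs=["trainees drawn from the target group"],
        assumptions=["selection criteria do not screen out the intended clients"],
        evaluation_questions=["Were the selection criteria applied consistently?"],
        performance_indicators=["% of trainees meeting the target-group definition"],
    ),
    # ... training, job placement and follow-up phases would follow the same pattern
]

# Tracing an objective back through the phases and their indicators
for phase in logic_model:
    print(phase.name, "->", phase.performance_indicators)
```

Recording the model in this explicit form makes it easier to trace any objective back through the phases which serve it, and to see which indicators would test each assumed link.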

Step 4 - Specify Alternative Ways of Meeting Program Objectives

This step applies particularly to new proposals but should also be considered in the
evaluation of existing programs as changes over time in the political, social or
technological environment often mean that previously rejected approaches could become
viable alternatives.

There is rarely only one solution to a problem or social need. The "optimal" solution
chosen can only be the best of the alternatives which are considered. Determining an
appropriate set of feasible alternatives which could achieve the program objectives is an
important task of this step.

It is particularly important that the search for options goes beyond the confines of the
agency's functional responsibility. For example a public works agency should not simply
examine construction solutions to a traffic problem, but also town planning solutions or
changes to laws and regulations.

Failure, or even the apparent failure, to consider an appropriate range of alternatives may
be used by those dissatisfied with the evaluation outcome to question the competence of
the evaluators and the credibility of the results.

The process of identifying alternatives will frequently lead to a reassessment of the


objectives because the way in which program objectives have been specified affects the
range of alternatives. For example, if Government has an objective to provide a specified
socio-economic group with a given standard of housing, there exists a variety of options
for achieving this, including:
. building of new government owned housing (single family detached dwellings,
group housing, flats etc);
. purchasing of existing housing by government;

. subsidising the purchase of new or existing housing by the target group;

. subsidising of rental charges for rented private sector accommodation; and


. subsidising of interest on loans to purchase housing (whether directly, through
low interest loans, or indirectly through tax subsidies).
Were the objective stated more narrowly, for example to provide public housing to a
specified socio-economic group, it is very likely that the program would not achieve best
value for the public dollar.

Since evaluation resources are limited, both the search for alternatives and their analysis
must have limits. The process of identifying a range of alternatives ultimately comes
down to judgement, based on the importance of the program (including its costs and the
degree of public interest), the decision context and the time and resource constraints
facing the evaluators. The following points in figure 7 are offered as a guide to selecting
alternatives.
FIGURE 7: SELECTING ALTERNATIVE SOLUTIONS FOR ANALYSIS

1. Review the history of the program and the reasons for the current evaluation (eg
identify the problem to be addressed by the program; the symptoms of that
problem which give rise to public/political concern; and other programs which
currently address, or in the past have addressed these matters).

2. Review reports, academic papers, professional journals, government files etc (eg
identify approaches by other governments or other countries to address these
problems; propositions advanced by professionals working in this field; and
"popular" solutions advanced by public figures, the press, interest groups, affected
individuals).

3. Use structured problem solving techniques, among the evaluation team, with
professionals working in the area and with interest groups. (Techniques such as
search conferences, DELPHI process, and priority scaling, can be very useful in
this step).

4. Undertake initial screening, perhaps supported by guidance from the decision


makers (eg the Minister) or by judgement based on preliminary "informal"
analysis or on work done in other contexts, to reduce the number of alternatives to
no more than five or six.
5. Report, perhaps only in a paragraph or two, on all significant options discarded in
this screening process, stating the reason for not proceeding further with their
analysis; and

6. Proceed to detailed analysis of remaining options.

Step 5 - Identify the Key Evaluation Issues

Given that evaluation resources are scarce, the pre-evaluation assessment should identify
key evaluation issues. They are determined mainly from an understanding of the purpose
for the evaluation (step 1) and of the nature of the program (step 2). The analysis of
program logic (step 3) will highlight both the critical assumptions on the relationship
between inputs and outputs/outcomes and the most sensitive variables in the program.
Assessment of alternatives (step 4) will suggest whether review of options is necessary.
Figure 8 lists the key evaluation questions.
FIGURE 8: KEY EVALUATION QUESTIONS

. PROGRAM RATIONALE
- Does the program make sense?
- Are the objectives still relevant?
- Are they still high priority?

. IMPACTS AND EFFECTS
- What has happened as a result of the program?

. OBJECTIVES ACHIEVEMENT
- Has the program achieved what was expected?

. ALTERNATIVES
- Are there better ways of achieving the results?

In addition the following questions are often relevant:
. Have matters relating to the program been questioned in the Parliament or public
forums?

. Are the evaluation results likely to influence the Government's decisions? Priority
should be given to those matters which will most likely be affected by the
outcome of the evaluation.

. Is the program a major user of resources, or will it have important future impacts
on the quality and distribution of services?

. Has the program been subject to review in the past three to five years, or have
relevant circumstances changed significantly since the program was last
reviewed?
It is important for the credibility of the pre-evaluation assessment that the issues to be
addressed are explicitly identified and that they do relate to the objectives of the
evaluation. Where a decision is made not to perform a detailed analysis of certain
aspects of a program, the reasons for this should be stated.

Step 6 - Identify the Evaluation Constraints

Related to the key evaluation issues are the constraints confronting the evaluation. The
most common constraints are time, cost, expertise and credibility. The pre-evaluation
assessment report should consider these in developing the brief for the evaluation study.

Time influences the range of activities that can be undertaken in the pre-evaluation
assessment. It demands a trade-off between comprehensiveness, on the one hand, and
usefulness on the other. If the Government has already made a decision, a
comprehensive but late evaluation may be a complete waste of effort. Worse, it may
bring the evaluation process into disfavour with government and management alike.
Cost determines what can be done. In an ideal world of unlimited resources a figure of
0.5 to 2 per cent of total program funds would generally be necessary to evaluate fully a
multi-million dollar program. In reality far less is allocated. The more limited the
resources the more important it is to focus on issues where the pay off is highest.
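As a purely hypothetical illustration, on this benchmark a $100 million program would imply an evaluation budget of roughly $0.5 million to $2 million.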

A lack of expertise can be a constraint both in the management and in the technical
aspects of evaluations. Lack of technical expertise can be compensated by seconding
staff from other agencies or by use of consultants; lack of managerial expertise in the
evaluation field is more difficult to redress in the short term. A comprehensive training
package on evaluation has been prepared under the Financial Management Improvement
Program to address these problems.
Credibility may be an important factor in the acceptability of an evaluation. In particular,
if the issue is highly contentious the perceived independence of the head of the evaluation
team may be more important than the technical accuracy of the results. In such instances
the evaluation could also be undertaken by an independent team, rather than being done
"in-house", even though the technical expertise and integrity of the agency is not in
question. Similarly, credibility in the eyes of line managers is often important, and this
will be enhanced if the evaluation team includes local representatives.
Finally, factors in the political and social environment often constitute significant
constraints and should also be considered in the pre-evaluation assessment.

No guidelines can be given on how to respond to particular constraints. Responding to
them calls for a commonsense approach aided by experience.

Step 7 - Assess Appropriate Evaluation Designs

Having identified the characteristics of the program we wish to analyse and the related
changes we wish to measure, the next step is to identify the best way to measure them,
consistent with time, resource and other constraints.
In the pre-evaluation assessment, the emphasis is on assessing the pros and cons of
alternative evaluation designs, rather than doing detailed measurements. It may,
however, be desirable to undertake pilot studies to ascertain the appropriateness of one
method over another.
The evaluation design in essence requires the following:

. definition of the evaluation questions


- this is the output of steps 3 to 6;
. definition of the activities of, or the changes due to the program which need to be
measured
- these should follow logically from the evaluation questions;
. identification of data sources;

. choice of the method(s) by which activity levels or degree of change attributable


to the program will be measured;

. choice of methods of collecting the data;


. choice of methodology to analyse the data;

. the process of synthesising the analysed data into comprehensible information for
decision makers.

This section focuses on the four "technical" dimensions which should be considered in
selecting the evaluation design, namely:

. the means of assessing changes attributable to the program rather than to non-
program factors;

. the collection of relevant data;


. the analysis of the data; and

. the aggregation and presentation of this data in a manner which enables
comparison by the decision makers.
Estimating the Extent of Change:

There are various data collection designs, discussed below, which aim to determine the
extent of change attributable to a program. The designs differ in their ability to isolate
the effects of the program from other possible changes not related to the program. Figure
9 summarises the main characteristics of the various evaluation designs.

Such designs include:


(i) Case study with "after" measurement and no control group

Measurements are taken of the population affected by the program at a single point in
time when the program has been implemented. The yardstick for comparison is generally
the "planned" or "target" performance. This approach is inexpensive, but suffers from
severe deficiencies:

. since no base-line data exist the basis of any planned performance or target is
questionable and the amount of "change" is uncertain;

. even if "change" has occurred, there is no empirical basis for ascribing its cause to
the program.
This approach may be a necessary first step where a new program is initiated to combat a
problem which is recognised but not quantified. For example the 1986 report of the
Financial Management Improvement Program included "snapshots" of the state of public
service management. In the absence of any historical data base, however, only
qualitative statements could be made concerning the degree of change since the FMI
program began.
(ii) Case study with "before" and "after" measurements and no control group
This design does enable an estimate to be made of the change which has occurred since
the program was initiated, and is probably one of the more commonly used evaluation
approaches. But it also suffers from deficiencies:

. in the absence of supporting evidence, there is no basis for ascribing the change to
the program;

. the data may reflect short-term fluctuations, or the results of other interventions
rather than the effects of the program;

. the "after" measure may not capture the longer-term impacts.

(iii) Time series and econometric analysis


In the time series approach the underlying trends over time are analysed for the key
indicators, and statistical projections are made of what the situation might have been in
the absence of the program. Statistical methods are then used to estimate both short run
and longer term impacts of the program.
Econometric techniques are statistical methods that estimate, on the basis of historical
data, the relationships between social and economic variables. These are also commonly
used to establish estimates of what might have been, in the absence of the program.
Because they consider more variables than simple time series analysis, the "predictive
value" of econometric analyses gives a greater degree of confidence. Time series and
econometric analyses provide greater and more reliable information content than the
previous two approaches, and are relatively inexpensive provided the necessary data is
available. They require a degree of technical expertise, however, and their reliability
decreases the longer the time span for the projections.
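As a minimal illustration of the time series idea (the figures, units and linear trend below are entirely hypothetical), a pre-program trend can be projected forward and compared with the observed post-program values to give a rough estimate of program impact:

```python
import numpy as np

# Hypothetical annual values of a key indicator before the program (e.g. long-term
# unemployment in thousands) and observed values after the program started.
pre_years = np.array([1979, 1980, 1981, 1982, 1983])
pre_values = np.array([41.0, 43.5, 46.2, 48.8, 51.1])
post_years = np.array([1984, 1985, 1986])
post_observed = np.array([52.0, 52.6, 53.1])

# Fit a simple linear trend to the pre-program years ...
slope, intercept = np.polyfit(pre_years, pre_values, 1)

# ... and project what might have happened in the absence of the program.
post_projected = slope * post_years + intercept

# The gap between projection and observation is a rough estimate of program impact.
estimated_impact = post_projected - post_observed
for year, impact in zip(post_years, estimated_impact):
    print(f"{year}: estimated reduction of about {impact:.1f} thousand")
```

A real analysis would use more sophisticated trend and seasonal models and would report the uncertainty surrounding the projection, but the underlying comparison is the one sketched here.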

(iv) Systems modelling


Systems modelling seeks to model mathematically the operations of a program,
emphasising actual causation rather than the statistical correlation which forms the basis
of time series and econometric analyses. The development of such models is often
expensive and requires significant expertise. They are usually more reliable in
technical rather than social areas.
(v) Pilot study with "after" measurement of pilot and control group

By adding a control measurement to the simple case study (design (i) above) it is possible
to make an estimate of the degree of change caused by the program on the assumption
that any difference between the pilot and the control is solely due to the program. There
still remains the doubt, however, that the population characteristics between the pilot and
control groups might have differed from the outset and be responsible for at least part of
the change.

(vi) Quasi-experimental design ("before" and "after" measurements of both control


and pilot group)

This method involves two or more measurements over time on both the pilot program and
the control group. Both rates of change and amount of change between the two groups
are then compared. It thus protects to a large degree against changes which might have
resulted from other factors. The major difficulty with this approach is ensuring that the
control and pilot groups have similar characteristics. Rigorous quasi-experimental
designs, coupled with a thorough attempt to determine causality, probably give the
highest level of confidence that can be achieved in everyday evaluation. It can, however,
be time consuming and costly.
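A simple way of combining "before" and "after" measurements on pilot and control groups is the difference-in-differences calculation sketched below. The figures are hypothetical, and a real study would also test whether the difference is statistically significant:

```python
# Hypothetical employment rates (%) for pilot and control groups,
# measured before and after the training program.
pilot_before, pilot_after = 22.0, 31.0
control_before, control_after = 23.0, 26.0

# Change within each group
pilot_change = pilot_after - pilot_before          # 9.0 percentage points
control_change = control_after - control_before    # 3.0 percentage points

# Difference-in-differences: the change in the pilot group over and above the
# change that occurred anyway (as proxied by the control group).
program_effect = pilot_change - control_change     # 6.0 percentage points

print(f"Estimated program effect: {program_effect:.1f} percentage points")
```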

(vii) Experimental design

This is the most rigorous, but also most costly and time consuming evaluation design. It
is similar to the quasi-experimental design in that specific changes in a pilot and one or
more control groups are analysed; however it differs in the way these groups are chosen.
In the experimental design a target population is randomly assigned either to the pilot or
the control group, thereby ensuring that any initial differences between the groups can be
described through precise statistical parameters. Because of time, cost and skills
constraints, such an approach is rarely used in public sector evaluation.

Figure 10 lists the conditions under which the more expensive experimental and quasi-
experimental approaches are likely to be appropriate. Figure 11 illustrates graphically
the conceptual difference between the outcomes of different evaluation designs.

Pilot Programs

Where new programs are being considered, there can be significant advantages in starting
with a pilot program rather than with full implementation. This approach permits
evaluation (in particular, the use of quasi-experimental and experimental designs) while
avoiding the risk of large scale program failure.

The purpose of this section has not been to equip readers to undertake the detailed design
of an evaluation, but to alert them to the issues involved. Evaluation design is a crucial
step upon which can depend the credibility of the entire evaluation. Further detailed
discussion of evaluation design issues is included in the US General Accounting Office
"Methodology Transfer Paper 4, Designing Evaluation" (G.A.O., 1984). Other useful
texts include: Bogden & Taylor (1975), Cook & Campbell (1979), Judd & Kenney
(1980), Keppel (1982), Kidder (1981), McCleary & Hay (1980), Posavac & Carey (1980)
and Rossi & Freeman (1982).
FIGURE 10: CONDITIONS UNDER WHICH QUASI-EXPERIMENTAL AND
EXPERIMENTAL DESIGNS ARE MOST LIKELY TO BE APPROPRIATE

1. There is likely to be a high degree of ambiguity as to whether outcomes were


caused by the program if some other evaluation design is used.
2. Some citizens can be given benefits through a pilot program different from
normal entitlement without significant controversy.

3. There is substantial doubt about the effectiveness of the program.

4. The new program involves large costs and a large degree of uncertainty and the
risk in funding the program without a pilot study is likely to be substantially
greater than the cost of the pilot.
5. A decision to implement the program can be postponed until the pilot is
completed.

6. Experimental conditions can be maintained reasonably well during the pilot study
period.
7. The findings are likely to be generally applicable to a substantial proportion of the
population of interest.
8. Client consent for participation in the pilot study is not required or, if it is, can be
obtained without invalidating the study.

(Based on Hatry et al (1983)).

Data Collection:

In considering evaluation designs it is also necessary to determine how data will be


collected. There are five broad approaches available for data collection: the use of
existing records and statistics, project monitoring reports, special collection efforts,
mathematical modelling and simulation, and "expert" judgement. In general it will be
necessary to use more than one of these approaches in any given evaluation.

(i) Existing records and statistics

Government, tertiary education and business organisations usually maintain statistical


collections on a range of demographic, economic, commercial, environmental and social
factors. These may be relevant to the evaluation as surrogate measures for variables on
which statistical data is not currently available, or as an aid to checking the
representativeness of sample populations. Previous program evaluations of similar fields
may also yield data which can be reworked.
Because data collection is costly, it is wise to ascertain whether existing data can be used
to supplement or replace special data collection.
In using existing data, care must be taken to check the accuracy of the data, its
consistency of definition over time, any adjustment made to the original data (smoothing,
re-basing etc), population characteristics and so forth.

Even if the conclusion is that new data is required, the analysis of existing data may be
warranted to give quick, but tentative answers to key questions which will be answered
more precisely in due course. For example, this may help refine the evaluation design by
highlighting the more sensitive parameters, or it may permit interim, but qualified, advice
to decision makers on performance.
(ii) Project Monitoring Reports

For existing Commonwealth programs, agencies should have in place Management


Information Systems which capture resource input data (staff resources, administrative
costs and specific program expenditure), throughput data (eg applications received and
processed) and output data (eg clients seen, successful applications, payments made).
Progressively, agencies should identify key evaluation data requirements and, as far as is
practical, build their collection into standard work procedures.
Before using such data, the evaluation should make an assessment of its reliability. Data
collected routinely by program staff which is not an automatic by-product of normal
work processes, and which is not of direct use to local managers, is often unreliable.

(iii) Special Collection Efforts


Special collection efforts range from questionnaire surveys which rely, inter alia, on
subjective responses of people, to precise measurement of processes by scientific
equipment (eg measurement of traffic volumes, soil acidity, biological oxygen demand of
water). In an ideal experimental situation such data collection efforts would normally
consist of three discrete activities:

. base-line survey to determine the situation at the beginning of the project;


. re-survey (using same 'population') after program implementation, allowing
reasonable time for the assumed logic of the program to work;
. special topic survey designed to analyse specific problems which occur during the
program, in particular, significant or unexpected deviations in program operation
from the assumed logic.

In many instances the detailed evaluation design occurs when the program is already in
operation and base-line surveys are not possible. In such situations it may be possible to
develop estimates of the original conditions through analysis of related existing data,
collected for other purposes.
Social surveys are an important means of collecting evaluation data. Their usefulness,
however, depends on the rigour of their design and execution. The choice of survey
strategy (eg self administered questionnaire, telephone or personal interview), the design
of a questionnaire, the determination of the sample population and sample size and the
training of survey interviewers are all tasks which require expertise. Where significant
evaluation issues hinge on the survey outcome, specialist advice should be sought.
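
By way of illustration, the order of magnitude of the sample required can often be
estimated with a simple calculation before committing to a survey. The following sketch
(in Python, using the standard formula for estimating a population proportion under
simple random sampling; the confidence level, margin of error and expected proportion
are illustrative assumptions only) shows such a calculation. It does not replace specialist
advice on survey design, but it helps in judging whether the cost of a survey is
proportionate to the evaluation issue at stake.

    # Approximate sample size for estimating a population proportion,
    # assuming simple random sampling and a large population.
    import math

    def sample_size(margin_of_error=0.05, z=1.96, expected_proportion=0.5):
        """Sample size for the stated margin of error at 95% confidence (z = 1.96)."""
        n = (z ** 2) * expected_proportion * (1 - expected_proportion) / margin_of_error ** 2
        return math.ceil(n)

    print(sample_size())        # about 385 respondents for +/- 5%
    print(sample_size(0.03))    # about 1068 respondents for +/- 3%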

(iv) Simulation and Modelling


Often it is feasible to develop physical or mathematical models of the operation of a
program. Examples include hydrologic models, such as a physical scale model of
runway extensions into coastal waters where the impact of construction works on tidal
patterns and sediment movement can be simulated; and road transport models, such as a
computer-based mathematical model where the impact of road improvements on travel
times, or on traffic volumes on particular roads, can be simulated. Such approaches
can be a valuable source for estimating both former and future states of the system.
Regard, of course, must be had to the assumptions underlying the model and the dangers
of extrapolating too far.
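
The following sketch (in Python) illustrates, in greatly simplified form, the kind of
computer-based mathematical model referred to above: a Monte Carlo estimate of annual
travel-time savings from a hypothetical road improvement. All parameter values are
invented for illustration and are not drawn from any actual program. Presenting a range
rather than a single figure in this way makes explicit the sensitivity of the result to the
underlying assumptions.

    # Highly simplified Monte Carlo model of a hypothetical road improvement.
    # Traffic volumes and per-trip time savings are assumed, not measured.
    import random

    random.seed(1)

    def annual_savings(trials=10000):
        results = []
        for _ in range(trials):
            daily_vehicles = random.gauss(20000, 2000)   # assumed daily traffic
            minutes_saved = random.uniform(1.5, 3.0)     # assumed saving per trip
            results.append(daily_vehicles * minutes_saved * 365 / 60)  # vehicle-hours p.a.
        results.sort()
        n = len(results)
        return results[n // 2], results[int(0.05 * n)], results[int(0.95 * n)]

    median, low, high = annual_savings()
    print(f"Median saving: {median:,.0f} vehicle-hours p.a. ({low:,.0f} to {high:,.0f})")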
(v) Expert Judgement

Not all change can be directly measured. In social fields the assessment of qualitative
changes often depends on "expert" judgement. In such cases it is important to ensure that
assessment approaches and rating procedures are designed such that the assessments of
different "experts" are done in a comparable way and that results are reproducible.

Data Analysis:
Data processing and analysis are time-consuming exercises requiring specialist skills and
a high level of technical rigour. This step is often the weakest link in evaluation. Not
infrequently large amounts of data are never processed, and often only a fraction is
analysed in time for the decision making process.

Of greater concern is that analysis is often limited to a very simplistic use of basic
statistics. While simple statistical analysis is appropriate in some aspects of analysis, and
especially with routine program monitoring, its use in situations where there are multiple
variables can lead to misinterpretation of the data and to invalid conclusions and
recommendations.
In particular there appears to be little awareness of or skill in the use of numerical pattern
recognition or classification techniques, such as cluster analysis, factor analysis, multi-
dimensional scaling, discriminant analysis and so forth. "The Fascination of Statistics"
(Brook & Arnold, eds, 1985) is an excellent introduction to this field for the non-
mathematician, while for more advanced insights, volume 2 of the "Handbook of
Statistics" (Krishnaiah and Kamal, 1982) is a valuable text.
Analysis issues should be considered in the evaluation design stage, so that the quality
and volume of data are geared to the use to which they will be put.
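
As a concrete illustration of one such technique, the sketch below (in Python, assuming
the scikit-learn library is available; the performance figures are invented) groups program
outlets into clusters with similar performance profiles using k-means cluster analysis.
The variables are standardised first so that no single indicator dominates the distance
calculation; the clusters that emerge (for example, high-cost, low-throughput outlets
versus low-cost, high-throughput ones) can then be examined for the factors that explain
the differences.

    # Grouping program outlets by performance profile using k-means clustering.
    # Data are invented; numpy and scikit-learn are assumed to be installed.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # columns: cost per client ($), clients per staff member, approval rate
    outlets = np.array([
        [120, 35, 0.80], [135, 30, 0.75], [90, 55, 0.90],
        [95, 60, 0.88], [200, 20, 0.60], [210, 18, 0.55],
    ])

    scaled = StandardScaler().fit_transform(outlets)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

    for row, group in zip(outlets, labels):
        print(f"cluster {group}: {row}")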

Step 8 - Develop Strategy for Evaluation Study

The product of the pre-evaluation assessment is a report to management, along the lines
suggested in Appendix C ("Suggested Outline for Pre-Evaluation Assessment Reports")
which recommends the preferred approach to the detailed evaluation study, and indicates
the likely costs involved and the resultant payoffs.

In recommending the strategy for the evaluation study, the following matters should be
dealt with:

(i) the terms of reference or brief for the evaluation study; this would include a clear
and unambiguous statement of the purpose of the evaluation and the authority to
do the study; key issues to be addressed; the specific questions to be answered;
the audience for the report; and the timing and decision context for any
recommendations or conclusions.

(ii) a preliminary work plan which indicates:

. how the purposes of the study are to be achieved: i.e., an explanation of
the specific tasks to be done and the evaluation designs suggested for
these tasks;
. when each and all tasks are to be completed;

. what evaluation products are required; and


. who are the recipients of the reports.
(iii) a clear statement of procedures for review of progress and of any key decision points;

(iv) a clear statement of time and resource constraints, and of procedures for
amending these;

(v) involvement of or contact with other agencies;

(vi) the composition of the steering committee and the evaluation team;
(vii) official points of contact among program officials and, where appropriate, clients;
and
(viii) an outline of procedures for amending the approved evaluation plan if that is
subsequently seen to be desirable.
CHAPTER 3
UNDERTAKING THE EVALUATION STUDY
A Checklist summarising the steps in an Evaluation Study is at Checklist B.10.

Certain tasks should precede the major commitment of staff or other resources for an
evaluation. These include formal executive agreement to the brief or terms of reference
for the evaluation study; preparation of a detailed work plan; selecting the study team;
and establishing lines of communication.
If a pre-evaluation assessment has been undertaken, much of the background for this
work will have been done. If not, then the eight steps in Chapter 2 should be addressed
in the first stage of the evaluation study.

Detailed Work Plan

The development of a specific plan of action is required to ensure that the method of data
collection and analysis is given adequate attention and to provide a basis for quality
control and control over timing and costs. The work plan should be flexible in order to
respond to the unexpected.
The first step in developing the work plan is a quick review of the pre-evaluation
assessment. It should address questions such as:

. have legislative, economic or other developments since the pre-evaluation
assessment affected the priority or relevance of the key issues identified?
. does the timetable proposed still meet the decision makers' requirements?

. is the proposed evaluation design suitable in the light of the time, cost, and skills
which are available for the project?

The next step is to develop a schedule covering:

(i) detailed analysis of the program logic, including inputs, activities and processes,
outputs, impacts and effects and their interrelation; (This goes beyond identifying
"possible" causal relationships as discussed in Chapter 2, step 3, to a rigorous
testing for the validity of the assumed linkages between inputs and outputs. This
may require extensive data collection, statistical analysis, modelling, etc).

(ii) a description of proposed evaluation methods including:


. performance indicators

. data sources

. sampling procedures and sample sizes


. analytical techniques to be applied

. comparisons to be made

. quality control procedures


(iii) identification of the professionals who will undertake the study and the time each is
scheduled to be involved;

(iv) nomination of steering committee, study director, team leaders, noting their
reporting relationship and areas of responsibility;

(v) identification of important milestones, including steering committee reviews and
target reporting dates; and

(vi) an outline of the evaluation products required.

Principles for the Conduct of an Evaluation Study


A high quality evaluation should meet the criteria of being useful, feasible, ethical and
accurate.
Useful: It should be addressed to those people who are involved in or are responsible for
the program and help them make decisions concerning strengths and weaknesses in the
program. It should emphasise the issues of most importance to them and provide timely
feedback.

Feasible: The evaluation should be conducted efficiently without major disruption, and
costs should not be out of proportion to the cost of the program. It should take into
account political or social pressures which might undermine the evaluation.
Ethical: It should be founded on explicit agreements that the necessary co-operation will
be provided, that the rights of the various parties will be respected and that the findings
will not be compromised. It should provide a balanced report which reveals both
strengths and weaknesses.
Accurate: It should clearly describe the purpose, activities and processes of the program.
It should reveal the basis of the evaluation design, acknowledging limitations which may
affect the confidence with which conclusions can be drawn. It should be controlled for
bias and should provide valid and reliable findings.

Practical Issues in Undertaking the Evaluation Study

In the performance of any evaluation, problems are frequently encountered. Some of the
most common issues that can arise are discussed here.

Collecting relevant data: In undertaking evaluation studies, there is often a temptation to
collect any data which might be of possible use. Questions which should be applied to
any data collection effort in order to minimise irrelevant data are:

. exactly what question is this piece of data intended to answer?

. what analytical model requires it?


. what calculation cannot be done without it?
. will it significantly affect the reliability or credibility of the conclusions?

Testing the reliability of data: An attempt should be made, at the time the data is first
generated, to estimate whether it is reasonable. This is especially important when
complex calculations are involved. For example, how does the answer compare with
rough calculations or with intuitive judgement?
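
A simple order-of-magnitude check of this kind might look as follows (a minimal sketch
in Python; both the detailed result and the back-of-envelope figures are hypothetical). If
the detailed figure and the rough check diverge markedly, the detailed workings, or the
rough assumptions, should be re-examined before the result is used.

    # Order-of-magnitude check: compare a detailed costing result with a rough estimate.
    detailed_admin_cost = 4_725_318      # hypothetical result of a detailed costing ($)
    clients_processed = 61_400           # hypothetical number of clients

    cost_per_client = detailed_admin_cost / clients_processed
    rough_estimate = 5_000_000 / 60_000  # rounded, back-of-envelope figures

    print(f"Detailed: ${cost_per_client:.0f} per client; rough check: ${rough_estimate:.0f}")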

To help ensure the integrity of the information collection procedures and the relevance,
accuracy and completeness of the information collected, effective quality control
procedures should be implemented and maintained during the evaluation study.
Although the nature of the mechanisms for quality control may vary with the particular
circumstances of a study, some form of the following is considered good practice in the
conduct of evaluation studies:

. pilot testing of information collection methods;

. using more than one source of information;


. monitoring the collection of information;

. editing the information collected; and

. should sampling be used, implementing procedures for handling non-response
and attrition.
Protecting the confidentiality of information about individuals: where data is collected
about individuals, it is important to ensure that they are not identifiable in the study
reports or in insecure files.
Documenting and referencing: Documenting appraisals of results and assessments of
alternatives is important. The documentation should be sufficient so that another
individual or team could reconstruct parts of it or use it in another study. Basic
assumptions should be clearly identified and recorded. The rationale for using indirect or
surrogate measures should be stated explicitly. Oral interviews should be summarised in
writing, dated and filed. Original documents should be retained. Complete files of
relevant raw data and working papers should be kept and filed so that they can be
retrieved easily for review. Information which cannot be readily filed should be
adequately described and referenced in the files.

The study team should design, use and save work papers. Well designed, clearly labelled
and fully legible work papers offer an important insurance policy to the study team.
Work papers should be dated and signed so that a clear trail is established as to who did
what and when. A review of the work papers will show whether the study team has been
thorough, whether an important fact or element of a problem may have been overlooked,
and whether all similar elements of the evaluation have been treated consistently. The work
papers should be checked against the evaluation plan to assure that the plan was carried
out or that changes are fully explained.
Adhering to time schedules: Effort should be made to anticipate possible delays and the
time schedule should make allowance for unforeseen delays. Complex tasks are
almost invariably harder than originally anticipated and, therefore, take longer than
estimated. In complex studies, detailed schedules for component parts may be necessary.
A proposal to expand the scope of the study or to do more work in order to sharpen the
results should be carefully justified, particularly if it involves risk of delay in the
schedule.
Leading and coordinating the study team: It is essential to maximise the interaction
among the study team members. The coordinator should ensure easy access to the
decision makers who expect to use the evaluation. A continuing dialogue should help to
make the evaluation products more useful and better accepted. The coordinator also
needs to impress on the team the importance of maintaining an open, honest, and
amicable relationship with the personnel of the program under evaluation. It is often
easy for program people to frustrate a study if they feel threatened, antagonised or
slighted.
Using computer-based models: For most large-scale, but routine, quantitative
manipulations (statistical analysis, cluster analysis, linear programming, etc), reliable
user friendly computer "packages" are available and should be used. When a program
has many complex interrelationships, and the effects of altering the assumptions or data
are not obvious, a specially designed, computer-based model may facilitate the study. In
such cases, creative computer programmers are extremely valuable additions to the
study team.

The structure and operation of any model, however, must be reasonably apparent to
decision makers who want to use the study: both its output and workings must be readily
understandable to them. Usually, this can be accomplished by carefully diagramming the
components of the model and explaining how each component operates and interacts
with the others. Users of the study will normally accept the computational competence
of the model only if the logic makes sense to them and they have confidence in the study
team.
Communicating Study Results

Communication of the findings of the evaluation is an extremely important aspect of the
study. If the client cannot understand, misinterprets or is unconvinced by the conclusions
then the evaluation effort is largely wasted. Chapter 4 discusses this aspect in more detail
and Appendix D presents a "Suggested Outline for an Evaluation Report".
CHAPTER 4
THE EVALUATION REPORT
It is crucial when preparing an evaluation report to describe the procedures adopted for
the evaluation study and present recommendations in a form which can be readily
examined and considered by decision-makers. A report runs the risk of failure if it:
. concentrates on issues which are of low priority to its audience(s);

. lacks logic or consistency in presenting information;

. is verbose or obscure;

. includes criticism which appears gratuitous or unfair; or


. lacks clear justification for contentious conclusions.

The remainder of this chapter consists of a series of questions and hints which members
of evaluation study teams might consider in preparing a report. The points are arranged
under headings recommended for the various sections of a report. It should be stressed
that the report format (which is set out in greater detail in Appendix D) is a suggested
format only; different studies may require quite different forms of presentation. Some of
the material in this chapter may appear banal; unfortunately, experience of report-
reading indicates that basic information can be left out.

Table of Contents

. Do the chapter titles capture the report's major points?


. Can the logical flow of the report be seen by looking from chapter titles to section
captions?

. Does the table of contents include appendices, glossaries, figures, tables and a
bibliography?

. Could the reader use this table as an accurate index?

Executive Summary

. Is this material concise and consistent with the text? Does it fairly represent the
text as a whole?

. Does the summary have short paragraphs and sentences, third person construction
and page or chapter references to the text?

. Are all recommendations included and clearly identified?

Introduction

. Has the study team indicated its major assumptions in carrying out the
evaluation?

. Are limits on the study (for example, limits of time or resources) set down
together with the consequences for the study that these limits entail?

The Substance of the Report

(A) Program (Element) Description


. Are the objectives of the program and its logic clearly set out in a fashion
intelligible to the non-expert reader?
. Are the major concepts used throughout the report properly defined?

(B) Summary of Analyses Conducted


. Are data collection and analysis procedures set out in a concise form? (More
detailed documentation can be reserved for appendices).

. Is data provided in accessible formats? (Tables of figures might, for example, be
supplemented by graphs).

. Are any limitations on data clearly explained?

Findings and Conclusions

. Is material derived from other sources properly attributed?

. Are findings and conclusions organised in such a way that the relationship
between them is clearly shown?

. Have the opinions of parties with an interest in the report's outcome been
reasonably represented? Are responses to these opinions sufficient?
Recommendations

. Are recommendations specific and addressed to all of the parties that need to act?
Are priorities identified?
. Is it apparent from the way they are presented that all recommendations derive
from the substance of the report?

. Where changes in legislation are recommended, is a description of the wording of
these changes proposed?

Resource Issues

. Are full resource costs and the priority of the program (element) within the
portfolio discussed?

. Are the consequences of providing different levels of resources adequately
treated?

. If additional resources are considered desirable, is the matter of full offsetting
savings within the portfolio discussed?

Appendices

. Do the appendices serve to amplify the text and would they be informative to
readers?
. Is the bibliography arranged to be useful to a reader wanting to make further
enquiries about the program and its evaluation?
. Are the efforts of the contributors to the report properly acknowledged?

CHAPTER 5
REVIEWING THE ADEQUACY OF THE EVALUATION
Agencies should monitor and review the effectiveness and efficiency of evaluation tasks
undertaken by or for their organisation, both to ensure accountability for the significant
resources consumed by this activity and to provide management information for
upgrading the service provided by evaluators. This review process will also assist in
tracking the fate of evaluation recommendations. This chapter provides a series of
questions which might be asked by management to assess the quality of an evaluation.
Issues and Approach: Assess the soundness of the evaluation approach and the need for
additional information.

Questions include:
. What difference did the evaluation make or do we think it will make? How does
this compare with what we initially expected?
. Did we do the right evaluation? Would an evaluation with a different focus have
been more helpful? What were the strengths and weaknesses of our approach?
. Did we succeed in answering the questions we set out to answer? Was the
information gathered convincing? How could we have been more convincing?
Did we gather too little or too much information?

. Will our analyses and conclusions stand up to expert scrutiny? Have we
documented our work thoroughly? Have we suitably qualified any areas of
uncertainty?

. Were program, regional office and line managers, whose functions were reported
on, satisfied that their views were understood by the evaluators and correctly
reported?

. Is there additional information that would be useful to the decision makers? How
much work would be involved in developing this information?

. What has to be done to implement the recommendations? Who has responsibility
for their implementation? Who will monitor their implementation? Is there
further work we could do that would make implementation much more likely? Is
it worth it?
Client Satisfaction: Assess the value of the evaluation to the client.

Questions Include:
. Were the client's needs adequately identified?

. Is the client satisfied with the evaluation product?


. Does the client need help in understanding the work or in implementing any
recommendations?
Timing: Assess whether the evaluation was received by the decision-makers on time.

Questions include:

. Did we meet the agreed-upon time frame for completion? Were critical tasks
completed on time?

. Was too much or too little time spent in pre-evaluation assessment and detailed
planning? Given the planning work that was done, could the evaluation phase
have been done more quickly?

. How would the time involved in preparing work products have changed if there
were more or fewer people assigned to the project?

Cost and Staffing: Examine the project in terms of economy and efficiency. In particular
assess implications for training and recruitment.

Questions include:
. Did we complete the evaluation within the estimated budget?

. Could the objectives of the evaluation have been fulfilled at less cost? What
aspects of the project were most and least cost effective?
. Were the costs out of proportion to the benefits resulting from the evaluation?
. Did management arrangements work smoothly? If there were problems, are
changes in policy or procedures needed to correct them?
. Were the talents and experience of the staff suited to the project?

. Does experience with this project suggest matters that need to be considered in
our training or recruiting activities?

APPENDICES
A. Types of Evaluation
B. Checklists for Pre-Evaluation Assessment and Evaluation Study
C. Suggested Outline for Pre-Evaluation Assessment Reports
D. Suggested Outline for Evaluation Reports
APPENDIX A
TYPES OF EVALUATION
The term "evaluation" means different things to different people, and the terminology
used to describe different evaluation activities varies from discipline to discipline. This
understandably can result in confusion. In order to clarify some of the meanings, this
appendix discusses very briefly seven broad categories of evaluation. It also outlines
how evaluation fits into the program planning and development, implementation,
operation and review cycle, who the evaluators are and when they might conduct
evaluations.

The Main Categories of Evaluation are:

First it is useful to draw a distinction between evaluation which has a strategic focus and
that with a tactical focus. Strategic evaluation is concerned with longer range planning,
management or resource allocation issues; while tactical evaluation is undertaken in
immediate support of operations.
Under the heading of strategic evaluation there are three broad sub-divisions:

. ex-ante evaluation;

. effectiveness evaluation; and


. meta-evaluation (or evaluation of an evaluation).
While under tactical evaluation there are a further four sub-divisions:

. performance monitoring;
. implementation analysis;
. compliance audit; and

. efficiency audit.

Of course the distinction between a strategic and tactical focus may sometimes be
blurred. For example a needs study (strategic) may also include detailed analysis of how
to phase the program in (tactical).

Similarly performance monitoring (tactical) often provides the essential information base
upon which effectiveness evaluation (strategic) is undertaken.

Performance Monitoring: This is a day-to-day systematic review activity that involves
ongoing oversight of the relationship between the external operating environment,
program inputs, processes, outputs and outcomes, in terms of progress towards targets,
compared with past performance or trend projections. Performance monitoring has a
tactical focus in that it assists ongoing program management by identifying areas
requiring corrective action, for example due to changing external conditions, and by
identifying areas requiring further in-depth evaluation. It also has a strategic focus in
providing information for ministerial, parliamentary and public scrutiny of program
operation and achievement; and in maintaining information required for in-depth
evaluation of program effectiveness.
Implementation Analysis: This provides guidance on the phasing in of a program; it
should be available before a decision is made to proceed. Particularly with service
delivery programs, initial operating difficulties and consequent political embarrassment
can be minimised by devoting resources to researching or analysing how best to get the
program operational.

Compliance Evaluation: This type of evaluation is primarily concerned with analysing
the design, development, implementation and operation of agency systems, procedures
and controls, and with the extent of compliance with legislative, central agency and
departmental directions. In practice the distinction between the traditional audit and
efficiency evaluation has become imprecise.

Efficiency Evaluation: This approach is the most common form of evaluation adopted in
government agencies. It is concerned not so much with the worth of the program, but
with testing or analysing the program operations, procedures and use of resources in
order to identify ways of improving program operation. Approaches include analysis of
management strategies and of interactions among persons involved in the program, and
assessing the outputs of the program in relation to the cost of resource inputs. In many
respects, efficiency or process evaluation is an extension of sound administration; that is,
the continuous checking of a program's operations to ensure that it is working well.
Ex-Ante Evaluation: "Ex ante" simply means "before"; such evaluation activities take
place prior to the decision to commence a program. They include research or
studies to estimate needs, to examine the adequacy or feasibility of alternative solutions,
and to assess feasible options for implementing the proposal. Needs assessment,
feasibility analysis, cost-benefit analysis and social or environmental impact studies fall
into this category. The results of ex-ante evaluations should provide guidance to decision
makers for developing the broad program strategy, refining program proposals,
determining objectives and performance indicators, and determining the appropriate level
of resources.

Effectiveness Evaluation: Whether a program is achieving its objectives and whether the
objectives themselves are still relevant and of high priority are factors considered in
effectiveness evaluations. Where efficiency relates to the question "... are we doing
things right?", effectiveness asks "... are we doing the right thing?". The purpose of
such evaluations is essentially to assist decision making on resource allocation to and
between programs and, more specifically, on whether resources available to a particular
program should continue at current levels, be expanded or reduced.
Meta Evaluation (Evaluation Audit): Resources applied to evaluation, as with resources
devoted to other activities, must be justified in terms of their contribution to program
objectives, the goals of the agency and ultimately of the government. The evaluators
themselves must be subject to audit. Evaluation audit ranges from analysis of how
decision makers have actually used evaluation results (and hence, indirectly, how well
the evaluators pitched their product to the market), to review of the procedures,
assumptions or data accuracy of the original evaluation. Little formal evaluation audit is
currently done by Commonwealth government agencies.

Figure A1 outlines for each of these seven broad categories, a range of other evaluation
terms commonly used in various disciplines.
WHO EVALUATES AND WHEN?
With each type of evaluation there is a common denominator, namely, the conversion of
data into information to assist decision making. Broadly speaking this entails:

. collecting relevant data

. measuring change
. analysing causality

. making judgements

. reporting conclusions

Decision makers need such information in many different circumstances: when
examining social needs, when exploring alternative ways of addressing them, during the
operation of a program, and after the event to assess what lessons were to be learnt from
a program.
Figure A2 illustrates, firstly, the phases in a program's life-cycle, namely planning and
development, implementation, operation and review. Secondly, it shows the factors on
which evaluation might focus at these different stages, namely:

. resource inputs, agency processes and environment;


. outputs and efficiency;

. outcomes;
. problems and social needs; and

. development of solutions and implementation strategy.

It should be noted that, as programs are rarely conceived in a vacuum, but generally
evolve from on-going activities, the boundary between review of the outcomes of an
existing activity, and assessment of needs in respect of a new program, will often be
blurred.
Figure A3 reproduces this cyclical process in linear form, so that the particular focus of
each of the various types of evaluation can be highlighted.
Several points are relevant in relation to this figure. First, the boundaries where one type
of evaluation leaves off, and another begins are in practice somewhat fuzzy. Secondly,
some disciplines may define the boundaries differently from the definitions implicit in
figure A1. Thirdly, it should be remembered that there is considerable "policy review"
activity in every agency, which may cover any of the evaluation types detailed. Because
they do not follow strict "scientific" methods (generally because of lack of time or
resources), such policy reviews are not usually considered to be "evaluation". This is
somewhat short-sighted, as it may in fact be a misallocation of resources to insist that
every evaluation follow the type of formal "scientific" procedures discussed in this
handbook.

Figure A4 summarises the characteristics of the different types of evaluation in order to
highlight, among other things, the users, timing and uses of each type of evaluation.
CHECKLISTS
PRE-EVALUATION ASSESSMENT AND EVALUATION STUDY
THE FOLLOWING CHECKLISTS SUMMARISE THE STEPS INVOLVED IN
UNDERTAKING THE PRE-EVALUATION ASSESSMENT AND A SUBSEQUENT
EVALUATION STUDY. DETAILED EXPLANATION OF WHAT IS INVOLVED IN
EACH STEP IS INCLUDED IN CHAPTERS 2, 3 AND 4.

B.1: STEPS IN THE PRE-EVALUATION ASSESSMENT

* CHECKLIST *

1. Define purpose of the evaluation:


. background to the evaluation

. audience

2. Define the nature, scope and objectives of the program:
. nature of problem addressed by program

. program authority or mandate

. program objectives
. actual or planned resource use
. performance indicators

3. Analyse program logic:


. logical links between inputs, outputs, outcomes and objectives

4. Specify alternative ways of meeting program objectives

5. Identify key evaluation issues:

. program rationale
. impacts and effects

. objectives achievement

. alternatives

6. Identify evaluation constraints:

. time, cost, expertise and credibility


7. Assess appropriate evaluation designs:

. management issues

. alternative evaluation methods

. data collection issues


. data analysis issues

. aggregating benefits and costs

8. Develop strategy for evaluation study:


. terms of reference

. preliminary work plan

. consider input from other agencies


. consider composition of steering committee and evaluation team

. prepare pre-evaluation assessment report


B.2: PURPOSE OF THE EVALUATION

* CHECKLIST FOR STEP 1 *

1. What are the objectives of the evaluation?


2. Who, or what event, initiated the evaluation? (eg routine evaluation cycle,
Auditor-General's report, ministerial request).
3. What is the stated reason for conducting the evaluation? (eg assist
development of program proposal, review of agency priorities).

4. What is the 'hidden agenda', if any? (eg defuse public controversy, answer
criticism, provide rationale for abolishing program).
5. Who is the primary audience for the evaluation report and what authority
does it have over program resourcing or management?

6. Which other key decision makers have a strong interest in the evaluation
and what influence do they have on program decisions?

7. Have the decision makers' needs and expectations been determined?


8. To what phase of the program development and implementation cycle will
the evaluation relate? (eg new policy proposal, review of existing
program, review of completed program).

9. What issues are of particular interest? (eg matters raised in Parliament or
by the Auditor-General; achievement of key program objectives; cost
effectiveness).

10. How important is each issue? (eg in terms of political impact, cost, or
scope for improved performance).

B.3: NATURE, SCOPE AND OBJECTIVES OF THE PROGRAM

* CHECKLIST FOR STEP 2 *

1. What is the mandate or authority for the program and are program activities
consistent with this?

2. What are the stated objectives of the program?


3. What were the catalysts which led to the development of the program? (eg who
were the key proponents, what studies/inquiries recommended this approach?)

4. What key needs, gaps in services or problems is/was the program intended to
solve?
5. What results are/were expected from the program?

6. What reasons are/were there for believing that the program would be effective in
achieving these results?
7. Is there a clear and unambiguous definition of the target group at which the
program is aimed?
8. Have program implementation or other changes in the social/political
environment affected the relevance of the original program objectives or
introduced new objectives? (eg changes in demographic profile, strong popular
support for program, creation of perceived "rights" to a benefit.)

9. What would be the consequences if the new program were introduced (or an
existing one abolished)? Who would be affected? Who would complain and who
would be glad? Why?

10. What measures or criteria were identified at the program development and
implementation phase as appropriate output and outcome indicators?

11. Are these performance indicators still relevant?


12. In the light of program operation experience, are there other performance
indicators which are more relevant or which assist further in understanding the
success or otherwise of the program?

13. In respect of each performance indicator, were targets (standards, levels of
service) set; when; by whom; with what justification; and were they achieved?
B.4: ANALYSE THE PROGRAM LOGIC

*CHECKLIST FOR STEP 3*

1. Specify the ultimate objectives of the program.


2. Subdivide the operation of the program into a manageable number of major
activities or phases (between 5 and 10 segments is usually appropriate).
3. Specify intermediate objectives relevant to each phase/activity (there should be at
least one intermediate objective for each of the program's ultimate objectives).

4. Identify the inputs and intended outputs or outcomes of each of these major
phases.
5. Identify significant secondary outputs/outcomes (whether desirable or
undesirable).

6. Specify the perceived logical relationships (i.e. how a particular phase is
supposed to achieve the intermediate objectives) and the implicit assumptions
underlying the relationship between the inputs, the outputs, the outcomes and the
intermediate or final objectives.
7. Confirm with the program managers and line managers that the model is a
realistic representation of what happens or, for a new program, is supposed to
happen.

8. Identify the evaluation questions which are of interest in respect to each phase
(these should directly address each assumption in point 6).
9. Specify performance indicators which can be used to monitor or answer the
evaluation questions in point 8.
10. Assess, in conjunction with program managers, what are the critical assumptions
and the corresponding key performance indicators.

B.5: IDENTIFY ALTERNATIVES

*CHECKLIST FOR STEP 4*

1. Review history of program and the reasons for the current evaluation.
2. Review reports, journals etc on approaches to the problem in question.

3. Use structured problem solving techniques (DELPHI, brainstorming etc) with
groups of professionals, clients etc.

4. Undertake screening of options.

5. Report briefly on discarded options.


6. Include selected options for analysis.
B.6: IDENTIFY KEY EVALUATION ISSUES

*CHECKLIST FOR STEP 5*

1. Program Rationale
- Does the Program make sense?

- Are the objectives still relevant?

2. Impacts and Effects

- What has happened as a result of the Program?


3. Objectives Achievement

- Has the Program achieved what was expected?

4. Alternatives
- Are there better ways of achieving the results?

B.7: IDENTIFY EVALUATION CONSTRAINTS

*CHECKLIST FOR STEP 6*

1. Time
2. Cost

3. Expertise

4. Credibility

5. Political and Social Environment


B.8: ASSESS APPROPRIATE EVALUATION DESIGNS

*CHECKLIST FOR STEP 7*

1. Specify those activities or changes (due to the program) which must be measured.
2. Identify sources of data.

3. Decide appropriate means of measuring changes due to the program as distinct from
changes due to non-program factors.

4. Decide procedures for obtaining data (eg sample survey, automatic monitoring,
simulation, modelling).

5. Decide appropriate analytical approaches for analysing the data.


6. Decide how the results are to be aggregated and presented.

B.9: DEVELOP STRATEGY FOR EVALUATION STUDY

*CHECKLIST FOR STEP 8*

1. Prepare terms of reference or brief which includes a clear and unambiguous
statement of the purpose and nature of the evaluation (key issues to be
addressed, the specific questions to be answered, the audience, the timing and the
decision context).
2. Prepare preliminary work plan indicating

. how the purpose of the study is to be achieved;

. when each and all tasks are to be completed; and


. what evaluation products are required.

3. Provide a clear statement of procedures for review of progress and of any key
decision points.
4. Provide a clear statement of time and resource constraints, and of procedures for
amending these.
5. Consider input from other agencies, composition of steering committee and
evaluation team; identify official points of contact among program officials and
where appropriate, clients; and

6. Prepare an outline of procedures for amending the evaluation work plan should
this subsequently be required.
B.10: STEPS IN THE EVALUATION STUDY

*CHECKLIST*

1. Get executive agreement on strategy, and in particular on the draft terms of
reference, recommended in the pre-evaluation assessment report.

2. Assign steering committee and study team; decide on the extent of involvement
of other agencies including central agencies, etc.

3. Prepare detailed work plan.

4. Prepare time, resources and methodology schedule for data collection, analysis
and reporting.
5. Undertake evaluation (eg collect data, test reliability, document and analyse data).

6. Communicate results.

(Refer also to questions suggested in Chapters 4 and 5.)

APPENDIX C: SUGGESTED OUTLINE FOR PRE-
EVALUATION ASSESSMENT REPORTS
1. An Executive Summary which includes:

- the objective of the study and the approach used;

- the key findings of the study; and


- the suggested strategy, including terms of reference, for the evaluation
study.

2. An Introduction which indicates:

- the questions addressed; and


- the approach used to conduct the pre-evaluation assessment and any major
constraints affecting the assessment.

3. A Program (Element) Description which describes:

- the background of the program element; and


- the program element's place in the overall portfolio program structure.

4. A Summary of the Analyses Conducted which includes:

- information on whether the program operates as intended;


- an assessment of the degree to which the activities of the program are
plausibly linked to the attainment of its desired results;

- a summary of the major approaches used in previous evaluations of
this or similar programs;

- a list of specific questions which could be answered in the evaluation
study; and

- a presentation of the evaluation approaches that could be used to answer
each evaluation question.

5. Possible Evaluation Designs indicating:

- a set of specific questions which should be addressed in the evaluation
study;

- the related evaluation designs which could be used and the reasons for
their selection, including the identification of the evaluation indicators and
methodologies;
- the confidence with which each question could be answered; and

- the time and resource requirements.

6. Strategy for the Evaluation Study including:

- draft terms of reference (i.e., recommended scope of study and key
questions to be addressed);
- preliminary work plan

- recommended evaluation design;

- recommended process for undertaking evaluation (eg in-house, consultant)

APPENDIX D: SUGGESTED OUTLINE FOR EVALUATION
REPORTS
1. Table of Contents

- list of chapter and section headings

- lists of figures and appendices

2. Executive Summary

- a brief statement of evaluation objectives and methods

- a summary of major findings and conclusions


- recommendations and matters needing further consideration

3. Introduction

- terms of reference for the study


- identification of constraints on the study

- statements of key assumptions and values underlying the report

4. The Substance of the Report

(A) Program (Element) Description


- a statement of the mandate and key objectives of the program

- an exposition of the logic of the program


- definition of key concepts

(B) Summary of Analyses Conducted

- justification for indicators selected in terms of major evaluation issues to
be addressed

- description of data collection procedures and measurement devices
together with indications of reliability

- outline of collection results

5. Findings and Conclusions

- results of analysis related to program (element) objectives


- findings from other relevant sources

- overall findings and discrepancies between these and program (element)
objectives

- conclusions organised in terms of major evaluation study issues

6. Recommendations

- recommendations set out to show derivation from findings and
conclusions

- alternative options considered and reasons for rejection

- any matters recommended for further study and estimates of the resources
needed for this.

7. Resource Issues

- program (element) costs together with costs for implementing all or parts
of the recommendations

- offsetting savings

8. Appendices

- detailed documentation of data collection and analysis procedures


- list of references

- list of staff/organisations consulted during the study


- list of steering committee and study team members

BIBLIOGRAPHY
Anderson, J.E. (1979): Public Policy Making, 2nd Ed. (New York: Holt, Rinehart &
Winston)

Auditor General of Canada (1983): Government Wide Audit on Program Evaluation.

Baugher, D. (1981): "Developing a successful measurement program", in D. Baugher (ed),
New Directions for Program Evaluation: Measuring Effectiveness. (San Francisco:
Jossey-Bass)

Bogden, R. & S.J. Taylor (1975): Introduction to Qualitative Research Methods. (New
York: John Wiley & Sons)

Brook, J. & G.C. Arnold (eds) (1985): The Fascination of Statistics. (New York: Marcel
Dekker)

Cook, T.D. & D.T. Campbell (1979): Quasi-Experimentation - Design and Analysis for
Field Settings. (Chicago: Rand McNally)

Dobell, R. & Zussman, D. (1981): "An evaluation system for government: If politics is
theatre then evaluation is (mostly) art", Canadian Public Administration, Vol 24/3,
pp 404-427.

Evaluation Research Society (1980): Standards for Program Evaluation.

G.A.O. (1976): Evaluation and Analysis to Support Decision Making. United States
General Accounting Office. PAD-76-9.

G.A.O. (1978): Assessing Social Program Impact Evaluation - A Checklist Approach.
United States General Accounting Office. PAD-79-2.

G.A.O. (1980): Evaluating a Performance Measurement System - A Guide for the
Congress and Federal Agencies. United States General Accounting Office. FGMSD-80-57.

G.A.O. (1984): Designing Evaluations. United States General Accounting Office,
Methodology Transfer Paper 4.

Gross, P. and Smith, R.D. (1984): Systems Analysis and Design for Management. (New
York: Dun-Donnelley)

Hargrove, F.C. (1980): "The bureaucratic politics of evaluation", Public Administration
Review, Vol 40, pp 151-159.

Hatry, H. et al (1976): Program Analysis for State and Local Governments. (Washington:
The Urban Institute)

Hatry, H. et al (1983): Practical Program Evaluation for State and Local Governments.
(Washington: The Urban Institute)

Haveman, R. (1976): "Policy analysis and the congress: an economist's view", Policy
Analysis, Vol 2/2.

Imboden, N. (1978): A Management Approach to Project Appraisal and Evaluation.
(Paris: OECD)

Joint Committee on Standards for Educational Evaluation (1981): Standards for
Evaluations of Educational Programs, Projects and Materials. (New York: McGraw-Hill)

Judd, C.M. & D.A. Kenney (1980): Estimating the Effects of Social Interventions.
(Cambridge, UK: Cambridge University Press)

Kennedy, J.A. (1980): A Proposal Towards A Method Evaluating the Effectiveness of
P&MD's Programs. Productivity Development Division, Commonwealth Department of
Productivity.

Keppel, G. (1982): Design and Analysis - A Researcher's Handbook. (Englewood Cliffs:
Prentice Hall)

Kidder, L.H. (1981): Research Methods in Social Relations. (New York: Holt, Rinehart
and Winston)

Krishnaiah, P.R. and Kamal, L.M. (eds) (1982): Handbook of Statistics 2 - Classification,
Pattern Recognition and Reduction of Dimensionality. (Amsterdam: North-Holland)

Larson, R.C. & Berliner, L. (1983): "On evaluating evaluations", Policy Sciences, Vol 16,
pp 147-163.

Lauth, T.P. (1985): "Performance evaluation in the Georgia Budgetary process", Public
Budgeting & Finance, Spring 1985.

McCleary, R. & R.A. Hay (1980): Applied Time Series Analysis for the Social Sciences.
(Beverly Hills, Calif.: Sage)

Neigher, W.D. & Schulberg, H.C. (1982): "Evaluating the outcomes of human services
programs: a reassessment", Evaluation Review, Vol 6/6, pp 731-752.

NSW Public Accounts Committee (1985): Report on performance review practices in
government departments and authorities. (Report No 15 of the Public Accounts
Committee of New South Wales)