
Planning and plan implementation: notes on evaluation criteria

Article in Environment and Planning B: Planning and Design · February 1989
DOI: 10.1068/b160127 · Source: RePEc



Environment and Planning B: Planning and Design, 1989, volume 16, pages 127-140

Planning and plan implementation: notes on evaluation criteria

E R Alexander
Department of Urban Planning, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, USA
A Faludi
Planologisch en Demografisch Instituut, Universiteit van Amsterdam, 1011 NH Amsterdam, The
Netherlands
Received 15 November 1988

Abstract. This paper concerns the distinction between 'good' and 'bad' planning. Three views
of the planning process are distinguished, with their associated criteria of the quality of plans:
planning as control of the future, implying that plans not implemented indicate failure; planning
as a process of decisionmaking under conditions of uncertainty, where implementation ceases
to be a criterion of success, but where it becomes difficult, therefore, to give stringent criteria
of the quality of a plan; and a view holding the middle ground, where implementation is still
important but where, as long as outcomes are beneficial, departures from plans are viewed
with equanimity. Similar distinctions are drawn in the implementation literature and in the
literature on programme evaluation. The authors seek to develop a rigorous approach to
evaluation under conditions of uncertainty. For this purpose, the authors draw on the policy-
plan/programme-implementation-process (PPIP) model developed by Alexander and give five
criteria for comprehensive evaluation: conformity, rational process, optimality ex ante, optimality
ex post, and utilisation. The procedure is outlined in considerable detail, by means of tables
and flowcharts. The framework confronts the dilemma that, although policy and planning must
face uncertainty, we must at the same time be able to judge policies, plans, and their effects.

A question which must naturally interest us as planners is: What is 'good' or 'bad'
planning? This is of course intimately linked to another issue which has been the
subject of some discussion over the years: What is planning? In some views
perhaps both these questions are trivial; after all, Vickers (1968) simply said
"planning is what planners do", and evaluating the effectiveness of planning may be
just as obvious.
This paper is presented on the premise that the answers to these questions are
neither obvious nor simple, and that they have important implications for how we
view and practise planning. We suggest that ideas on what planning is and how it
should be evaluated are changing. Established views are fading, and alternative
models of the planning process are proposed as replacements.
The conventional planning model also implied a set of criteria for 'good' and
'bad' planning. These criteria had, and probably still have, a strong influence on
how planning and planning efforts are regarded both by planning practitioners and
by others. We will suggest that perhaps these criteria were never realistic to begin
with, and that other criteria should replace them. At the same time, if planning is
to have any credibility as a discipline or a profession, evaluation criteria must enable a
real judgment of planning effectiveness: good planning must be distinguishable
from bad.
First, the relationship will be explored between different definitions of planning
that have been proposed, and various perspectives on planning evaluation. Next,
we will discuss the link between planning and plan evaluation and implementation
assessment, which has also been the subject of a growing literature. Last, we will
suggest some criteria for evaluating planning processes, plans, and their outcomes,
criteria that respond to the shortcomings in previous evaluation approaches.
¶ This paper is based on discussions held while Professor Alexander was a visiting professor
at the Institute of Planning and Demography of the University of Amsterdam in the fall term
of 1987.

Planning definitions and evaluation


In his paper "If planning is everything, maybe it's nothing", Wildavsky (1973)
showed the link between the definition and evaluation of planning. He defined
planning as control of the future, and suggested that, since uncertainty makes control
of the future impossible, the question 'what is good planning?' is unanswerable.
If Wildavsky's premises are accepted, his conclusion is irrefutable: planning cannot
be evaluated, and is, in essence, an act of faith.
One of Wildavsky's implied axioms is, indeed, incontrovertible. A human
activity, if it is not undertaken solely as a symbolic ritual, must be capable of
evaluation, so that practice can learn the lessons of experience, and success can be
distinguished from failure. The link between planning and action, and between
plans, implementation, and results, is today universally accepted.
But defining planning as control of the future implies that planning is not
successful if there is anything less than total conformity. Less extreme definitions
of planning than Wildavsky's have been proposed, which would make evaluation
possible without making demands that are impossible to meet.
Responding to Wildavsky, Alexander (1981) suggested that planning is not
everything. He defined planning as the societal activity of developing optimal
strategies to attain desired goals, linked to the intention and power to implement.
This definition limits planning and excludes many areas of important social and
individual activity. At the same time, it suggests some criteria for evaluating plans
and planning processes.
These criteria are still focused on implementation, but they link the quality of
planning and plans to the optimality of the strategies that were devised. In this
view, a plan that was implemented, and where expected positive outcomes
significantly outweigh unanticipated undesired effects, is effective. In retrospect, it
seems easy to judge such a strategy as successful.
Unfortunately, evaluation is rarely that simple. What of planning efforts which
were not, or were only partly, implemented? Are these total failures? In the
Wildavsky view they would be. But our evaluation begins to be more complex.
Rationality, as a model for justifying decisions (Faludi, 1986a, page 84), becomes
an important evaluative criterion, in addition to outcomes as compared with
intentions. Here, rationality means the superiority of a proposed course of action
over its alternatives. Demonstrating rationality may require analyses, predictions,
and evaluations in support of the proposals.
Did the planning process conform to the requirements of rational decisionmaking?
Given the information available to planners and decisionmakers at the time, could
the chosen strategies reasonably be judged to be feasible and optimal? These are
not easy questions to answer.
Answering these questions demands an ex post reconstruction of the decision-
makers' ex ante perception of their situation. Such reconstructions are difficult—
but not impossible—exercises in historical interpretation. Like the historian, the
analyst has to stick rigorously to what the decisionmakers knew about the situation,
and to their motives and the context of their actions.
But if the scores on the rationality test are positive, then any shortfalls between
plan and reality cannot be attributable to the planners or plans, unless planning is
expected to be superhuman. Rather, planning failure here must be the result of
changes that could not be anticipated.
Planning and plan implementation 129

This is very important, because such changes are intrinsic to our human and
social condition: the power of anticipation is limited by uncertainty. Uncertainties
include uncertainty about the decision environment: what are future trends going
to be?; uncertainty about goals: for what values (our own and those of future
'consumers' of our plans' results) should we plan?; and uncertainty about related
areas of choice: what decisions and choices are going to be made in areas related
to the subject of current policy or planning efforts, for example, national economic
policy, pending environmental legislation, etc? (Friend and Jessop, 1977, pages 88-89;
Hall, 1980, pages 4-11).
Uncertainty is a central element in another definition of planning that has
recently been proposed. Faludi's "decision-centred" view of planning (1987,
pages 116-137) abandons the direct link to action that has been suggested by
observers of the planning process (Friedmann, 1969; 1987, pages 44-46; Gross,
1971). Instead, he defines planning as a process of creating a frame of reference
for operational decisions: those decisions which represent the commitment to
action by the decisionmaking agent or through which the decision agent deploys
other organisations or units in planning or implementation activities.
Faludi breaks this link not to deprecate the importance of action. On the
contrary, decisions on action to be taken here and now are so important that
decisionmakers cannot be overconcerned with following some plan. Plans are only
there to be helpful, when some form of advance structuring of decision situations
is needed. But the structuring devices are secondary in importance. What are of
primary importance are decisions.(1)
It is for this reason that flexibility is incorporated into the decision-centred view
of planning from the start. In this view, change in decision situations is likely
between planning and operational decisionmaking, so nonconformity of outcomes
or nonimplementation of plans are not necessarily failures. If plans were used in
operational decisionmaking, then they served their purpose, even if operational
decisions and their outcomes prove to be quite different from those prescribed.
This approach sees plans as prior investments which help to improve the
operational decisionmakers' grasp of their situation. As long as decisionmakers
avail themselves of plans, the plans fulfil their purpose. So, to come to a positive
conclusion about a plan, it is not necessary for it to be followed strictly; indeed, it
need not be followed at all. All that is required in this view for the plan to be
effective is that it be used.
In overview, we can recognise three different approaches to uncertainty, each
conforming to one of the three above definitions of planning. Wildavsky's planning
is a 'straw man' who has to eliminate uncertainty if he is to be conceded the right to
exist. Alexander's definition recognises uncertainty, which planned strategies have to
incorporate if they are to be effective, and which plan evaluation must take into
account in assessing implementation. Faludi's definition embraces uncertainty, to
the extent that the link between planning and outcomes is broken, and implementation
conformity becomes ultimately irrelevant to the evaluation of planning.
Arraying the three definitions on a continuum, we find Wildavsky at one pole
where plans not implemented always indicate failure, and Faludi at the other where
implementation ceases to be a criterion of success. Alexander holds the middle
ground where implementation is still important but where, as long as outcomes are
beneficial, departures from plans are viewed with equanimity.

(1) This is even true where plans carry legal force: if they do not fit the exigencies of the
operational decision, they are ignored as a matter of course.

Implementation and plan evaluation


Related fields may offer some illuminating parallels to the evolution in views of
planning that has been reviewed above. These are the study and assessment of
implementation and programme evaluation. Observers and analysts of implementation
recognised the importance of uncertainty from the beginning, for example Pressman
and Wildavsky's (1973) classic study of the Oakland Office of Economic Opportunity
project, which can almost be said to have launched this field.
Nevertheless, the approaches to implementation and implementation assessment
that developed in the early to mid-1970s can be characterised as 'linear' (Alexander,
1985, pages 407-408; Faludi, 1987) or 'top-down' (Sabatier, 1986). The authors
concerned assume that policies or plans are complete at a given point in time, and
evaluate implementation by the degree to which outcomes conform to policy. This
approach is best presented in the work of Mazmanian and Sabatier (1981; 1983).
Subsequent views of the policy-implementation process modified this approach
considerably, and saw the transformation of ideas into action as much more
interactive. The process was variously described as 'circular', 'reflexive', or, finally,
as a 'negotiative process' (Alexander, 1985, pages 408-409). Clearly, such views
have implications for implementation assessment: evaluation can no longer simply
compare the conformity of the outcomes with the policy or plan. Instead,
implementation itself becomes the object of evaluation.
Approaches to programme evaluation went through a similar transformation.
Originally, programme evaluation was presented as an objective—almost scientific—
undertaking. A programme's success could be ascertained by measuring its impacts,
using one of a variety of more or less rigorous experimental designs (Campbell and
Stanley, 1966; Hatry et al, 1973; Tripodi et al, 1971). Gradually, the objectivity
of programme evaluation came under question (House, 1980; Weiss, 1972; Wholey,
1979), and recognition of the 'politics' of evaluation became the order of the day
(Williams, 1975; Weiss, 1978).
As a result, accepted styles of programme evaluation changed. Programmes
were no longer expected to deliver outputs, or to generate recognisable impacts.
Instead, programmes also became the objects of 'process'-type evaluations, in which
the delivery of the programme itself rather than its product became the focus of
attention (Alterman et al, 1984; Madsen, 1983; Patton, 1980; Rutman, 1977). In
these evaluations, relevant criteria were no longer only the presence of positive
impacts that had been planned and were attributable to the programme intervention,
but rather process characteristics such as client involvement, organisational interaction,
or sense of accomplishment.
Thus, evaluation of planning and plans, implementation assessment, and
programme evaluation have evolved through the last two decades in ways which
reflect a common problem. All began with models that implied a relatively
determinate relationship between intention and outcome, where accomplishment
was measured by assessing conformity between policies, plans, and programme
objectives, and actual outcomes and impacts. All have substituted a consciousness
of process for that preoccupation with product, and have recognised the fallibility of
understanding, the ubiquity of uncertainty, and the socially constructed nature of
'objective' knowledge.
The problem is now: how can we evaluate? How can we distinguish between
success and failure, between effective planning and incompetent or misguided
efforts? This problem must be confronted, if we are not to succumb in a sea of(2)

(2) This summary review focuses on the US scene; for a valuable international comparison,
see Levine et al (1981).

relativism which makes us vulnerable to our harshest critics, who should be, and
often are, ourselves. Evaluation, in each of these fields, is a challenge that must be
met so that learning can be possible. Learning from experience can only be
accumulated and transformed into knowledge through systematic evaluation,
generalisation, and development of new theories and norms for practice.

Evaluating plans and planning


Unquestionably the evolution described above represents progress: from simplicity
to complexity. The decision-centred view of planning, like the interactive or
negotiative model of implementation, and the process-oriented approach to
programme evaluation, are improvements on their predecessors, because they are
more realistic and incorporate uncertainty and change as facts of life.
But here we want to address the problem that this complexity raises: the fact
that it has apparently become impossible, if we take these approaches at their face
value, to undertake any evaluation. This warrants some explanation.
According to Popper (1959), in empirical enquiry, only falsifiable propositions
should be called scientific. This can be applied to evaluation of planning or plans
as well, in the sense that the objects of the assessment must be able to fail any of
the tests involved. At the end of the day, it must be possible to give a 'thumbs
down' and, furthermore, to convey the reasons for one's negative judgment to
others. In other words, evaluation is unworthy of that name unless there are
criteria for the evaluator to recognise the 'good' and distinguish it from the 'bad'.
Thus, using a term borrowed from Popper and his school, we may say that plans
are fallible, and that evaluation must relate to their fallibility.
Of the three modes discussed, the decision-centred model seems to be particularly
vulnerable to this type of critique. This is because it deliberately breaks the link
between plans and outcomes 'on the ground'. If this were to mean that all planning
is 'good' planning, how could planning be evaluated? And if it cannot be evaluated,
how can any claim be made for planning as an activity that contributes something
to society and humankind? For a view of planning which again makes the
evaluation of planning, plans, and plan outcomes possible, we must see planning in
its larger context: planning as part of the social deliberative and interactive process
which links aims to action, and which transforms ideas into realities.
This process is recursive (Mack, 1973, pages 135-139), and essentially hierarchical
in its progression from broad abstraction to concrete and case-specific reality.
This is not to suggest that the flow through the stages is necessarily top-down:
depending on the stimulus and context, it can be bottom-up (Elmore, 1979/80), or
begin at any intermediate level, as shown in figure 1 (see over). This process has
been called the PPIP: 'policy-plan/programme-implementation process' (Alexander,
1985).
The PPIP model offers a view of planning that allows us to integrate policy,
planning, projects, and programmes, operational decisions, implementation and
implementation decisions, and the outputs, outcomes, and impacts of plans and
their implementation. First we must define all these and relate them to one another;
this is done below, with the relationships between these elements shown in table 1.

(3) Postuma (1987) shows that this is not necessarily so in his evaluation of the 1935 General
Extension Plan of Amsterdam. This plan provided a framework for housing-related decisions
until long after World War 2, but with respect to port developments it failed to give meaningful
guidance. Thus, a plan, or parts of it, can be shown not to have worked.

A policy or a plan can be defined as a set of "instructions ... that spell out both
goals and the means for achieving those goals" (Nakamura and Smallwood, 1980,
page 31). Policies and plans may be distinguishable from one another by their
respective scope and range, and their relative degrees of abstraction or concreteness
and specificity.(4)
Programmes and projects are specific interventions to achieve defined objectives,
discrete 'chunks' of solutions, as it were, to specific problems (Wildavsky, 1979,
pages 391-393). The programme delivers services or initiates some course of
action, such as regulation, reorganisation, etc.(5)
The project produces a concrete product: a facility, construction, infrastructure,
etc. A useful distinction is between 'strategic projects', that is, projects undertaken
by higher level authorities as part of their broad mandate (for example, facilities or
infrastructure of national or regional importance such as airports, harbours, or
major highways) and other projects implemented by local jurisdictions and the
private sector (Faludi, 1986b, page 260).
Operational decisions are those decisions made in the context of the deliberative
process that commit the decision agent to action. Reversal of an operational
decision entails costs. Operational decisions can be likened to output. In a
manner of speaking, they are whatever leaves the planning agency in terms of
stated intentions, persuasive statements, etc. Operational decisions need not,
however, be implementation decisions; they can also be decisions affecting lower-
level or other agencies or organisations: regulatory approvals, funding allocations,
etc. But they are distinct, in their association with commitment, from planning
decisions. Plans reflect commitments that are easily suspended or reversed by
merely substituting one form of words for another (Faludi, 1987, pages 116-117).
Implementation and implementation decisions here refer to action and operations
in the field. Indeed, if we adopt current perspectives on implementation, the
division between policy, planning, and implementation is fuzzy, and the definition
of implementation will vary relative to the level of organisation or government
concerned (Alexander, 1985, pages 409-410).

[Figure 1, a flowchart linking stimulus, policy, plan, programme, and implementation, appears here in the original.]
Figure 1. The policy-plan/programme-implementation process.

(4) Key terms need to be defined for the purposes of discussion because they are sometimes
used in different senses (compare Williams, 1976, pages 272-273). There are other usages,
like that of Friend and Jessop (1977, page 111), which define policies as forms of expression
to be used within plans—the other forms being programmatic statements.
(5) Again, other usages of these terms exist (for example, see Friend and Jessop, 1977).
Williams (1976) defines a programme as a cluster of activities (by implication, with spatial
extension, for example, nationwide) and a project as a single activity within such a cluster.

But viewing the PPIP as a whole we can distinguish implementation as action
and operations in the field designed to achieve change 'on the ground'. Implementation
decisions are a special class of operational decisions, therefore: those decisions
which produce the final outputs of a programme or a project, and which impact
directly upon the client, the organisational, or physical environment. Such decisions
include the application of regulations, disbursement of funds, contracting and
procurement, personnel actions and management, service delivery, etc (see table 1).
In table 1 we see all these elements of the process which transforms ideas into
realities arrayed on the dimension which relates to their essential difference: the
degree of abstraction and generality, or concreteness and particularity. We can
now review the alternative evaluation criteria for planning which have evolved as a
result of the alternative definitions that have been discussed above.
Three distinct evaluation approaches can be identified. Traditional or conventional
'objective' assessment of policy or planning effectiveness, success in implementation,
and programme accomplishment, ignores uncertainty, as we have seen. It demands

Table 1. The policy-plan/programme-implementation process: elements and relationships.
(Columns run from the abstract, general, and broad on the left to the concrete, particular, and specific on the right.)

Deliberative process: policy | plan(s) | programme(s), project(s)
    Agent: government(s), organisations, institutions, agencies
Decisions: operational decisions elaborating or implementing policy | operational decisions elaborating or implementing plans | operational or implementation decisions and actions
    Agent: agencies implementing policy, plans, or programmes and projects
Actions and outputs: plan(s), programme(s), (strategic) projects | programme(s), strategic and other projects | legislation, personnel actions, contracts and procurement, resource allocations, disbursements, etc
Results and impacts: change(s), elaboration, development | implementation, development projects | construction and development, service delivery (service programmes), administrative action (managerial and reorganisational programmes), application of legislation and regulations
Object: policy, plan(s), strategic project(s) | plan(s), programme(s), project(s) | physical, built, and socioeconomic environment; other organisations, agencies, firms, households, individuals

conformity of operational decisions, implementation processes, and concrete results
with the intentions expressed in policies and plans.
'Subjective' evaluation takes uncertainty into account. Uncertainty is incorporated
by evaluating the planning process and assessing the optimality of the resulting
strategies. This must be done in the light of the actual planners' or decisionmakers'
ex ante knowledge and information and their perceived and actual constraints.
This is different f r o m 'objective' assessment. So, we may expect some plans to fail
one test, but pass the other. We may conclude that the tests are complementary.
'Decision-centred' evaluation examines the use of the policy or plan as a frame
of reference for operational decisions. Acceptance of uncertainty is integral to this
evaluation approach: changes in the perceived decision situation (which is the
cognitive context in which policies and plans are developed and operational decisions
are taken) are a sufficient reason for nonconformity between operational decisions
and their frame of reference.(6)
Each of these approaches has its strengths and weaknesses. 'Objective' evaluation
has the advantage of being concrete and intuitively acceptable. Its weakness is its
failure to allow for unavoidable and irreducible uncertainty: thus such evaluations
(and they are common) have made demands for performance which have been
impossible to fulfil.
'Subjective' evaluation has the advantage of allowing for uncertainty, and allowing
for planners', programme designers' and implementors' fallibility by making
judgments based on their perceived decision situations. Including rationality and
optimality criteria still enables positive and negative evaluations to be made.
This approach meets the requirement, set out previously, that the test must be
constructed so that a plan could fail it. Its weaknesses are its complexity and the
difficulty of reconstructing the ex ante decision situation. But neither difficulty is
uncommon in social research.
The strengths of 'decision-centred' evaluation are in its logical consistency.
Embracing uncertainty, it absolves policy and planning from responsibility for
subsequent operational decisions and implementation. But this is also its apparent
weakness: in severing the link between policies, plans, and outcomes, it seems to
have lost the essential ingredient that any evaluation must have—that outcomes
could be negative as well as positive. This concern, however, is alleviated if we
specify the conditions under which we would regard a plan as useful to operational
decisionmakers.(7)

A proposed framework for policy-plan-implementation evaluation


Here a framework for evaluating policy and plan implementation is presented.
This combines the three evaluation approaches which we see as, in effect,
complementary. The framework lists criteria in a programmed sequence of
questions to be applied to the policy, plan, or planning process under consideration,
as well as to its outcomes. Depending on the responses to this sequence, evaluation
can be positive, neutral, or negative.
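One way to see the logic of such a programmed sequence is as a simple decision procedure. The sketch below is purely illustrative and not part of the original framework: the `Assessment` fields stand in for the five criteria (conformity, rational process, optimality ex ante, optimality ex post, and utilisation), and the boolean inputs and three-valued verdict are hypothetical simplifications of the questions tabulated in the paper.

```python
# Illustrative sketch only: the field names and the collapsing of each
# criterion to a boolean are our assumptions, not the paper's specification.
from dataclasses import dataclass

@dataclass
class Assessment:
    conforms: bool          # 1 conformity: do outcomes match intentions?
    rational_process: bool  # 2 completeness, consistency, participation
    optimal_ex_ante: bool   # 3 optimality as perceived at decision time
    optimal_ex_post: bool   # 4 optimality judged with hindsight
    plan_was_used: bool     # 5 utilisation as a frame of reference

def evaluate(a: Assessment) -> str:
    """Return 'positive', 'neutral', or 'negative' for a policy, plan, or programme."""
    if a.conforms and a.optimal_ex_post:
        return "positive"   # implemented as intended, with beneficial results
    if a.rational_process and a.optimal_ex_ante:
        # shortfalls traceable to unforeseeable change, not to the planners
        return "positive" if a.plan_was_used else "neutral"
    return "negative"       # neither conformity nor a defensible process

print(evaluate(Assessment(False, True, True, False, True)))  # → positive
```

The point of the sketch is the sequencing: conformity is checked first, but a nonconforming plan is not condemned outright; it is passed on to the process and optimality tests before a verdict is reached.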

(6) By an extension of the same logic, decision-centred plans must deal with contingencies, if
in no other way than by allowing for future adaptations (Faludi, 1987).
(7) This approach has been applied empirically by Postuma (1987). An elaboration specifies
four conditions which in part foreshadow the complementary evaluation proposed below:
(1) conformity with reference to the plan; (2) deliberate (that is, reasoned) departure from
the plan; (3) reference to the plan in analysing the consequences of nonconforming operational
decisions; (4) regenerative capacity of the plan, that is, systematic review and amendment
using the plan as frame of reference (Wallagh, 1988, pages 122-123).

This evaluation framework sequentially applies criteria from each of the three
evaluation approaches discussed above.
(1) Conformity This intuitive question is taken over from the conventional
evaluation approach. It asks: "To what degree do operational decisions,
implementation decisions, and actual outputs, outcomes, and impacts conform to
the goals, objectives, intentions, and instructions expressed in the policy, plan, or
programme being evaluated?" This test concerns two questions, therefore: (a) Was
the plan followed, or is it being implemented? (b) Are its effects as desired?
But, unlike in the conventional evaluation approach, conformity is not the sole
criterion of success. Implementation or results of policies or plans which do not
conform, in some degree or other, do not automatically elicit a negative evaluation
of the policies or plans 'responsible'. Rather, additional criteria are sequentially
applied.
To the degree that conformity exists, the policy, plan, or programme has met
one condition for a positive evaluation. Other conditions involve additional criteria
which are presented below.
(2) Rational process A rational approach to the planning and decisionmaking
process is another criterion that is applied, whether or not operational decisions
and outcomes are found to be conforming to plan or policy requirements. A
rational process here means conforming to certain normative requirements in
process and method. These essentially consist of the following general conditions
(the more specific ones associated with formal rationality in a narrower sense of
the word are discussed below under ex ante optimality):
(a) Completeness Reasonable acquisition and use of available knowledge and
information, and the 'design' [search for, or development of, options (Alexander,
1982)] and evaluation of alternative courses of action; applying this requirement
means an assessment of the ex ante decision situation.
(b) Consistency Logical consistency in the data, methods used in their analysis and
synthesis, and strategies presented in the conclusions and recommendations;
adoption and implementation of recommended strategy; examination of policy or
plan documents can illuminate the consistency of policies and plans.
(c) Participation Involvement in policy or plan development of relevant affected
parties, and their participation in critical decisions; the values reflected in the
goals and objectives of a policy or plan must be a weighted aggregation of these
interests. This criterion reflects the aspiration toward uninhibited communication
and consensus of critical rationality (Habermas, 1984). Legislative, policy, and plan
documents, and interpretive reconstruction of the planning process may be necessary
to assess the degree to which this requirement has been met, and at best this
remains an essentially ideological, political, or subjective evaluation.
(3) Optimality ex ante, or rationality in the narrow sense Could the strategy or the
courses of action prescribed in the policy or plan under assessment be considered
optimal? Determining optimality involves assessing relationships between aims and
means. When this happens ex ante, obviously we are talking about such relationships
as perceived by the decisionmakers in the course of taking their decisions.
(4) Optimality ex post Was the strategy or were the courses of action prescribed
in the policy or plan under assessment in fact optimal? As against the evaluation
of the plan under (2) and (3) above, this is ex post assessment of the goals and
objectives of the undertaking that has been implemented. It also goes beyond the
test proposed under (1) above, where one question was whether the effects were
the ones the plan aimed for. But, even if they were, with hindsight it is possible to
conclude that these effects were not, in fact, optimal; this is why a separate
evaluation is necessary.

Table 2. Evaluation questions.

1 Conformity
1.1 Do policy-plan-programme-project (PPPP) outcomes or impacts conform to PPPP instructions or projections?
    If yes, go to 1.1.1. If no, go to 2.
1.1.1 Is conformity complete or partial?
    If complete, go to 1.2. If partial, go to 1.1.2.
1.1.2 Is the degree of partial conformity significant in terms of impact on the relevant (socioeconomic, physical, built) environment?
    If yes, go to 1.2. If no, go to 1.1.3.
1.1.3 Is partial conformity so limited as to be almost negligible?
    If yes, PPPP rates negative; go to 2. If no, disaggregate the policy or plan evaluation into more conforming and less conforming parts and go to the start for each separately.
1.2 Does the PPPP have a significant directive function (that is, is it more than a projection of practices, procedures, or trends that would have occurred without the respective PPPP, and is it more than a collage of other PPPPs)?
    If yes, PPPP rates positive; assume that the PPPP has been used; but it can still be evaluated for rationality and optimality; go to 3. If no, PPPP rates negative, in spite of conformity, due to the absence of a directive function.
2 Utilisation
    Since the response to 1 indicates nonconformance, explore reasons for nonconformance with utilisation or nonutilisation; go to 2.1.
2.1 Was the PPPP used or consulted in making operational decisions involved in the development or implementation of this or other PPPPs?
    If no, go to 2.2. If yes, PPPP rates positive, but may still be assessed for rationality and optimality; go to 3.
2.2 What was (were) the reason(s) for nonconformance or nonutilisation?
2.2.1 Change in decisionmakers?
    If yes, go to 2.2.2. If no, go to 2.3.
2.2.2 Could this change have been anticipated, or could the PPPP have incorporated flexibility or adaptability to respond to such a change?
    If yes, PPPP rates negative, but may still be assessed for rationality and optimality ex ante; go to 3. If no, go to 2.3.
2.3 Change in decision situation?
2.3.1 Caused by (a) objective changes in environment, phenomena, trends? (b) perceived changes in environment, phenomena, trends? (c) changes in societal or organisational values, goals, objectives? (d) changes in available means, resources, strategies, technologies?
    If yes, go to 2.3.2. If no, PPPP rates negative but may still be assessed for rationality and optimality ex ante (go to 3); reasons for nonutilisation in the absence of change may be found in these assessments.
2.3.2 Could the change(s) in the decision situation have been anticipated or allowed for in the PPPP (for example, through prediction, flexibility, adaptability, potential for revisions, etc)?
    If yes, PPPP rates negative, but may still be assessed for rationality and optimality; go to 3. If no, PPPP rates neutral; go to 3.
3 Rationality
    PPPP can always be evaluated for rationality; go to 3.1.
3.1 Consistency: are the provisions of the PPPP internally logical, compatible and consistent with its goals, objectives, premises, and analysis?
    If yes, go to 3.2. If no, PPPP rates negative, but may still be evaluated for information and participation; go to 3.2.
3.2 Information: does the PPPP incorporate and use the best data, technology, information, methods, and procedures that were available in the context and at the time of the PPPP's preparation and development?
    If yes, go to 3.3. If no, PPPP rates negative, but may still be evaluated for participation; go to 3.3.
3.3 Participation: did all relevant groups, interests, organisations, institutions, social units, and individuals participate in the preparation of the PPPP and in making critical decisions? Do these decisions and the PPPP in general reflect the weighted aggregate of affected groups?
    If yes, go to 4. If no, PPPP rates negative, but may still be evaluated for optimality; go to 4. (Note: negative responses to these questions, when questions 2.2 or 2.3 received negative responses too, may offer reasons for nonconformity to or nonutilisation of the PPPP.)
4 Optimality ex ante
    PPPP can always be evaluated for optimality ex ante; go to 4.1.
4.1 Was the recommended or adopted strategy or course of action in the PPPP optimal (that is, the 'best') in the light of the decision situation prevailing at the time of the PPPP's preparation and development?
    If yes, PPPP rates positive; go to 5. If no, go to 4.2.
4.2 Did the PPPP rate positive on the rationality criterion?
    If yes, go to 3 and reassess. If no, PPPP rates negative; go to 5.
5 Optimality ex post
    PPPP can always be evaluated for optimality ex post; go to 5.1.
5.1 Was the recommended or adopted strategy or course of action in the PPPP optimal (that is, the 'best') in the light of present analysis: perceived values, goals, objectives, options, constraints and observed outcomes, impacts, and unanticipated consequences?
    If yes, go to 5.2. If no, PPPP is rated neutral; the failure is not due to the PPPP but to different values, options, constraints, impacts, etc, recognised in hindsight.
5.2 Did the PPPP rate positive on the test of optimality ex ante?
    If yes, PPPP rates positive. If no, then this is a freak result which may be caused by post-PPPP value changes or unintended or unanticipated positive effects; assess for possible implications for the future.
5.3 Did the PPPP rate positive on the rationality criterion?
    If yes, PPPP rates positive. If no, go to 3 and reassess.

This test is not easy, and may involve a considerable degree of subjectivity, as
attested to in much evaluation literature and many examples (Mazmanian and
Sabatier, 1983, pages 9-11; Weiss, 1972, pages 6-12; Williams, 1975). On the
other hand, a simple assessment on the basis of implementation conformity and
internal rationality alone risks verdicts such as: "The operation succeeded but the
patient died."
(5) Utilisation The fifth criterion is whether the policy or plan was used as a
frame of reference for operational decisions. This criterion, however, does not
simply generate a negative evaluation in the case when a policy or plan was not
followed in making operational decisions. Rather, the reason for nonconformity is
elicited in an exploration of the planning decisionmakers' and operational decision-
makers' decision situations.
Changes in the decision situation may offer sufficient reasons for nonconformity
to policies or plans, giving the element of uncertainty an important role in the
evaluation. However, decisionmakers, analysts, and planners also have an
obligation to incorporate uncertainty into their policies and plans, in the form of
prediction and projection of possible outcomes and context scenarios, flexibility,
and adaptability of adopted strategies. Accordingly, the evaluation includes
judgments about the degree to which changes in the decision situation could have
been predicted or anticipated (table 2).

Figure 2. The PPPP (policy-plan-programme-project) evaluation sequence. (Key: y = yes; n = no; c = complete; p = partial.)
These criteria are applied sequentially in a series of questions that are shown in
table 2. Depending on the answer to each question that is suggested by an analysis
of the object of evaluation (a policy, plan, programme, or project), successive
questions are applied, as shown in figure 2. A positive, neutral, or negative
evaluation, then, is the result of the sequential application of each of the above
criteria wherever relevant, and all the criteria come into play in this process of
policy or plan evaluation.
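The sequential logic of table 2 can be sketched as a simple decision procedure. The following is an illustrative sketch only, not part of the original paper: the function and the answer keys (`conforms`, `directive_function`, and so on) are our own labels, and the sketch deliberately omits the rationality and optimality reassessment loops (questions 3-5), tracing only the conformity and utilisation branches that determine the initial rating.

```python
# Illustrative sketch of the table 2 evaluation sequence; question ids and
# function name are invented for the example, not taken from the paper.

def evaluate_pppp(answers):
    """Return 'positive', 'neutral', or 'negative' for a policy, plan,
    programme, or project (PPPP), following the main branches of table 2."""
    # 1 Conformity: do outcomes conform to PPPP instructions or projections?
    if answers.get("conforms"):
        # 1.2 A conforming PPPP still rates negative if it has no
        # significant directive function.
        return "positive" if answers.get("directive_function") else "negative"
    # 2 Utilisation: nonconformance; was the PPPP consulted in
    # operational decisions anyway?
    if answers.get("used_in_decisions"):
        return "positive"
    # 2.3 A change in the decision situation excuses nonconformance,
    # but only if the change could not have been anticipated (2.3.2).
    if answers.get("situation_changed") and not answers.get("change_anticipatable"):
        return "neutral"
    return "negative"
```

A fuller implementation would, as the table specifies, go on to assess rationality and ex ante and ex post optimality even for negatively rated PPPPs; the sketch only shows how the conditional "go to" structure resolves into a rating.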
The process of developing and implementing policies, plans, and programmes is
complex, and evaluating that process cannot be simple either. An approach is
proposed here that is more complex than the extremes of policy and plan evaluation
implied in the traditional model with its standard of conformity and the 'decision-
centred' model with its standard of utilisation.
Though more laborious than these, the above evaluation framework is feasible
with available analytical and interpretive tools, and, given the limits of subjectivity,
ideological bias, and historical reconstruction, reflects a more realistic approach.
At least, it confronts what we can only describe as a dilemma: while policy and
planning must face uncertainty, we must at the same time be able to judge policies,
plans, and their effects.

References
Alexander E R, 1981, "If planning isn't everything, maybe it's something" Town Planning Review 52 131-142
Alexander E R, 1982, "Design in the decision making process" Policy Sciences 14 279-292
Alexander E R, 1985, "From idea to action: notes for a contingency theory of the policy implementation process" Administration and Society 16 403-426
Alterman R, Carmon N, Hill M, 1984, "Integrated evaluation: a synthesis of approaches to the evaluation of broad-aim social programs" Socio-Economic Planning Sciences 18 381-389
Campbell D T, Stanley J C, 1966 Experimental and Quasi Experimental Designs for Research (Rand McNally, Chicago, IL)
Elmore R, 1979/80, "Backward mapping: implementation research and policy decisions" Political Science Quarterly 94 601-616
Faludi A, 1986a Critical Rationalism and Planning Methodology (Pion, London)
Faludi A, 1986b, "Towards a theory of strategic planning" The Netherlands Journal of Housing and Environmental Research 1 253-268
Faludi A, 1987 A Decision-centred View of Environmental Planning (Pergamon Press, Oxford)
Friedmann J, 1969, "Planning and societal action" Journal of the American Institute of Planners 35 311-318
Friedmann J, 1987 Planning in the Public Domain: From Knowledge to Action (Princeton University Press, Princeton, NJ)
Friend J K, Jessop W N, 1977 Local Government and Strategic Choice (1st edition 1969) (Pergamon Press, Oxford)
Gross B, 1971, "Planning in an era of social revolution" Public Administration Review 31 259-296
Habermas J, 1984 The Theory of Communicative Action. Volume 1: Reason and the Rationalization of Society translated by T McCarthy (Beacon Press, Boston, MA)
Hall P, 1980 Great Planning Disasters (Weidenfeld and Nicolson, London)
Hatry H P, Winnie R E, Fisk D M, 1973 Practical Program Evaluation for State and Local Government Officials (The Urban Institute, Washington, DC)
House E R, 1980 Evaluating with Validity (Sage, Beverly Hills, CA)
Levine R A, Solomon M A, Hellstern G M, Wolman H (Eds), 1981 Evaluation Research and Practice: Comparative and International Perspectives (Sage, Beverly Hills, CA)
Mack R P, 1973 Planning on Uncertainty: Decision Making in Business and Government Administration (John Wiley, New York)
Madsen R, 1983, "Use of evaluation research methods in planning and policy contexts" Journal of Planning Education and Research 2 113-121
Mazmanian D A, Sabatier P A, 1981 Effective Policy Implementation (Lexington Books, Lexington, MA)
Mazmanian D A, Sabatier P A, 1983 Implementation and Public Policy (Scott Foresman, Glenview, IL)
Nakamura R T, Smallwood F, 1980 The Politics of Policy Implementation (St Martin's Press, New York)
Patton M Q, 1980 Qualitative Evaluation Methods (Sage, Beverly Hills, CA)
Popper K R, 1959 The Logic of Scientific Discovery (Hutchinson, London)
Postuma R, 1987, "Werken met het AUP: stadsuitbreiding van Amsterdam 1939-1955", Werkstukken van het Planologisch en Demografisch Instituut, Universiteit van Amsterdam, Amsterdam
Pressman J L, Wildavsky A, 1973 Implementation: How Great Expectations in Washington are Dashed in Oakland (University of California Press, Berkeley, CA)
Rutman L, 1977, "Formative research and program evaluability", in Evaluation Research Methods: A Basic Guide Ed. L Rutman (Sage, Beverly Hills, CA) pp 57-71
Sabatier P A, 1986, "Top-down and bottom-up approaches to implementation research: a critical analysis and suggested synthesis" Journal of Public Policy 6 21-48
Tripodi T, Fellin P, Epstein I, 1971 Social Program Evaluation (Peacock Publications, Ithaca, NY)
Vickers Sir G, 1968 Value Systems and Social Process (Basic Books, New York)
Wallagh G, 1988, "Tussen mens en werking", Masters Thesis in Planning, University of Amsterdam, Amsterdam
Weiss C H, 1972 Evaluation Research: Models of Assessing Program Effectiveness (Prentice-Hall, Englewood Cliffs, NJ)
Weiss C H, 1978, "Improving the linkage between social research and public policy", in Knowledge and Policy: The Uncertain Connection Ed. L E Lynn Jr (National Academy of Science, Washington, DC) pp 23-81
Wholey J S, 1979 Evaluation: Promise and Performance (The Urban Institute, Washington, DC)
Wildavsky A, 1973, "If planning is everything, maybe it's nothing" Policy Sciences 4 127-153
Wildavsky A, 1979 Speaking Truth to Power (Little, Brown, Boston, MA)
Williams W, 1975 Social Policy Research and Analysis (American Elsevier, New York)
Williams W, 1976, "Implementation analysis and assessment", in Social Program Implementation Eds W Williams, R E Elmore (Academic Press, New York) pp 267-292

© 1989 a Pion publication printed in Great Britain
