
Evaluation Approaches

Prepared by: Randel Roy G. Raluto


Methods

We have designated this branch methods since, in its purest form, it deals
with obtaining generalizability, or “knowledge construction,” as Shadish,
Cook, and Leviton (1991) refer to it.
Valuing

 Initially inspired by the work of Michael Scriven (1967), the valuing
branch firmly establishes the vital role of the evaluator in valuing.
Those on this branch maintain that placing value on data is perhaps the most
essential component of the evaluator’s work. Some subsequent theorists
extend the evaluator’s role to include systematically facilitating the placing
of value by others (e.g., Guba & Lincoln, 1989).
Use

 The use branch, through the pioneering work of Daniel Stufflebeam (initially
with Egon Guba) and the work of Joseph Wholey, originally focused on an
orientation toward evaluation and decision making. In essence, work done by
theorists on this branch expresses a concern for the way in which evaluation
information will be used and focuses on those who will use the information.
Donald Campbell
 Shadish et al. (1991) call Donald Campbell the “Methodologist of the Experimenting
Society.”
 Most notable from an evaluation perspective are his papers on experimental and
quasi-experimental designs for research (Campbell, 1957; Campbell & Stanley,
1966). The focus on experimental designs (and the more practical quasi-
experimental designs) is an attempt to “rule out many threats precluding causal
inference” (Shadish et al., 1991, p. 122).
Three major areas of social science research design
 First, the authors explained the conditions necessary to conduct a true
experimental study, where randomization is the hallmark.
 Second, two key notions relevant to this are articulated and explained.
 They called the degree to which an experiment is properly controlled internal
validity and referred to how widely applicable the results of an experiment
are as external validity.
 Third, they recognized that experiments are not perfect and that they should
not, and cannot, be used in a great many situations. Thus, as an alternative
to the true experiment, they describe, in great detail, quasi-experimental
designs.
 Campbell also acknowledged that qualitative methods are helpful for
evaluation areas such as implementation and the identification and
interpretation of side effects. Despite these forays into broadening his
methodological perspectives beyond experimental approaches, however, his
work remained grounded in experimentation.
 The ideas put forth in Campbell and Stanley’s manuscript are now the
foundation of almost all social science research methods courses. Perhaps
even more remarkable was this paper’s impact on the promotion of new ideas
related to research design. It could be argued that the introduction of and
favorable response to quasi-experimental designs helped give rise to a
climate that was apparently more accepting of alternative thinking about
social science research.
Robert Boruch
 Robert Boruch’s philosophy of evaluation is most similar to the work of Donald
Campbell in considering the randomized field experiment as the ideal
evaluation approach. Randomized field experiments, however, are distinct
from the quasi-experiments put forth by Campbell and Stanley in their
seminal piece, Experimental and Quasi-Experimental Designs for Research
(1966).
 Randomized field tests are also different from quasi-experiments [emphasis in
original]. The latter research designs have the object of estimating the
relative effectiveness of different treatments that have a common aim, just
as randomized experiments do, but they depend on methods other than
randomization to rule out the competing explanations for the treatment
differences that may be uncovered. Quasi-experiments and related
observational studies then attempt to approximate the results of a
randomized field test.
 For Boruch, evaluation is likened to conventional scientific research. It
involves the systematic application of rigorous randomized research designs
for measuring the extent of a social problem and assessing the
implementation, relative efficacy, and cost-effectiveness ratio of social
intervention programs (Boruch, Snyder, & DeMoya, 2000).
 Boruch is steadfast in viewing the randomized field experiment as the most
effective way of obtaining the least-equivocal estimate of a social program’s
effects (Boruch et al., 2000).
 Boruch, McSweeney, and Soderstrom (1978) explain the randomized
experiment as an approach that randomly assigns individuals, or clusters of
individuals, to one of two treatment groups. This procedure makes certain
that at the outset, in the absence of program-induced differences, groups will
be statistically equivalent, thereby maximizing the internal validity of an
evaluation. The random assignment approach also avoids the interpretive
problems that affect alternative evaluation designs by assuring a fair
comparison.
 We have placed Boruch on the methods branch of the theory tree as a direct
continuation of the Campbell experimentation influence.
Michael Patton

 The most prominent theoretical explication of the utilization (or use)
extension was developed by Michael Patton (1978, 1986, 1997).
 Patton maintains that the evaluator must seek out individuals who are likely
to be real users of the evaluation. Patton refers to them as “intended primary
users.”
 Patton presents a flow chart of utilization-focused evaluation (UFE) in which
the first step is the identification of intended users, including primary
intended users. The other four major phases of UFE are users’ commitment to
the intended focus of the evaluation and to evaluation utilization;
involvement in methods, design, and measurement; engagement, actively and
directly interpreting findings and making judgments; and making decisions
about further dissemination.
 A major finding of research on evaluation utilization is that a primary
factor in obtaining use is the “personal factor” (Patton et al., 1977).
 The third major UFE concept emanating from the personal factor is the
dictum that the evaluator be “active—reactive—adaptive.” Evaluators need to
be active in identifying intended users and focusing questions, reactive in
continuing to learn about the evaluative situation, and adaptive “in altering
the evaluation questions and designs in light of their increased understanding
of the situation and changing conditions” (Patton, 1997, p. 135).
 Another addition is his introduction of the term developmental evaluation. In
this activity, “the evaluator becomes part of the program design team or an
organization’s management team, not apart from the team.”
