
This article was downloaded by: [North Dakota State University]

On: 12 October 2014, At: 07:59


Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954
Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH,
UK

Journal of Educational and Psychological Consultation
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/hepc20

The Study of Implementation: Current Findings From Effective Programs that Prevent Mental Disorders in School-Aged Children
Celene E. Domitrovich & Mark T. Greenberg
Published online: 10 Nov 2009.

To cite this article: Celene E. Domitrovich & Mark T. Greenberg (2000) The Study
of Implementation: Current Findings From Effective Programs that Prevent Mental
Disorders in School-Aged Children, Journal of Educational and Psychological
Consultation, 11:2, 193-221, DOI: 10.1207/S1532768XJEPC1102_04

To link to this article: http://dx.doi.org/10.1207/S1532768XJEPC1102_04

JOURNAL OF EDUCATIONAL AND PSYCHOLOGICAL CONSULTATION, 11(2), 193–221
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

The Study of Implementation: Current Findings From Effective Programs that Prevent Mental Disorders in School-Aged Children

Celene E. Domitrovich and Mark T. Greenberg
Pennsylvania State University

Prevention science is a rapidly advancing field and is at the point where a num-
ber of preventive interventions have documented the ability to change devel-
opmental trajectories and reduce negative outcomes. Recently, reports sum-
marizing these “effective” programs have circulated among researchers and
practitioners. Surprisingly, many of the highest-quality programs fail to take
adequate steps to monitor and verify program integrity. This weakens the con-
clusions that can be drawn regarding the program outcomes and reduces the
likelihood that replications will resemble the original program. The next chal-
lenge facing the prevention field is to help consumers who are implementing
effective programs in naturalistic settings do so with quality and fidelity to the
original program so that they achieve similarly successful outcomes.
This article reviews implementation issues in prevention trials and specifi-
cally highlights the study of implementation in the 34 programs determined
to be effective in a recent review conducted by the Prevention Research Cen-
ter for the Center for Mental Health Services. Reasons for the lack of attention
to implementation and suggestions for ways to incorporate implementation
measurement into prevention initiatives are discussed.

Correspondence should be addressed to Celene E. Domitrovich, Prevention Research Center, Pennsylvania State University, 109 South Henderson Building, University Park, PA 16802. E-mail: cxd130@psu.edu

Over the past decade, there have been tremendous advances in the field of prevention science, particularly in the theory, design, and evaluation of prevention programs. Although research is continuing on the development of new interventions and the establishment of their efficacy, a substantial number of studies have documented the beneficial impact that programs can have on changing developmental trajectories and reducing
negative outcomes. The promising research findings from this body of
work are beginning to influence public policy as federal, state, and local
governments are now calling for the utilization of empirically validated, ef-
fective models of intervention for children and families (Department of Education, 1998; National Institute of Mental Health, 1998; National Institute on Drug Abuse, 1997). As efficacy trials have demonstrated positive effects,
research questions have begun to shift to an emphasis on determining how
to achieve successful outcomes in naturalistic settings with local ownership
of the intervention process (Institute of Medicine [IOM], 1994). The next
challenge facing prevention scientists is to help the consumers put “proven
programs” into place effectively, so that they reproduce the effective out-
comes shown when they were first developed and evaluated. Greater at-
tention must be given to both the measurement of dosage and the quality
and fidelity of intervention delivery, especially as empirically validated
prevention programs begin to “go to scale.”
A number of reports recently reviewed programs that demonstrate ef-
fectiveness. For example, the Center for the Study and Prevention of Vio-
lence at the University of Colorado at Boulder developed a series entitled
the Blueprints for Violence Prevention (Elliot, 1998), which is now being supported by
the Office of Juvenile Justice and Delinquency Prevention. Other “best
practice” publications covered the fields of violence prevention and school
safety (Drug Strategies, 1998), social-emotional learning (Greenberg, Zins,
Elias, & Weissberg, in press), positive youth development (Catalano,
Berglund, Ryan, Lonczak, & Hawkins, 1998), substance use (Drug Strat-
egies, 1996), and mental health (Olds, Robinson, Song, Little, & Hill, 1999;
Tobler Research Associates, 1988). Numerous other academic groups and
private foundations wrote similar documents, adding to the collection of
reviews available to the public.
To complement and extend those reports, we recently published a review
(commissioned by the Center for Mental Health Services of the Substance Abuse and Mental Health Services Administration) that summarized the cur-
rent state of knowledge on the effectiveness of preventive interventions for
mental health disorders in school-aged children (Greenberg, Domitrovich,
& Bumbarger, 1999; see http://www.psu.edu/dept/prevention). This in-
cluded universal (targeted to the general public), selected (targeted to indi-
viduals at higher than average risk), and indicated (targeted to high-risk
individuals with detectable signs of a disorder) prevention programs (IOM,
1994). The goal of this article is to review issues in implementation of prevention trials and specifically highlight the study of implementation in the 34
programs determined to be effective in this report (Greenberg et al., 1999).

DEFINING IMPLEMENTATION

Implementation quality, also referred to as treatment integrity (Dane & Schneider, 1998; Gresham, 1989; Gresham, Gansle, Noell, Cohen, & Rosenblum, 1993), fidelity (Moncher & Prinz, 1991), or adherence, was dis-
cussed by a variety of researchers and examined across a number of disci-
plines including psychology, education, and public health. Although there
is variation in how implementation is defined and measured, one basic def-
inition proposed by Yeaton and Sechrest (1981) is “the degree to which
treatment is delivered as intended” (p. 160). Similarly, Durlak (1995) de-
scribed implementation as “what a program consists of in practice” (p. 5)
and how much it is delivered according to how it was designed (Durlak,
1998). Whereas impact evaluation measures program outcomes,
process evaluation involves gathering data to assess the delivery of pro-
grams. Scheirer (1987) suggested that it is preferable to begin a program
evaluation by thinking that an unclear innovation is being provided in an
uncertain fashion. Accordingly, before measuring outcomes, a comprehen-
sive evaluation should specify the program components that are supposed
to be implemented and identify which ones are actually delivered. The
gaps between plans and delivery represent variation in implementation
(Scheirer, 1994).
In a recent review of primary and secondary prevention programs,
Dane and Schneider (1998) examined the extent to which five aspects of
implementation were verified in evaluation studies conducted between
1980 and 1990. In their report, program integrity was a multidimensional
construct defined as: (a) the degree to which program components were
delivered as prescribed (e.g., adherence), (b) the frequency and duration of
the program administered (e.g., dosage), (c) qualitative aspects of the pro-
gram delivery (e.g., content, affective quality), (d) participant responsive-
ness, and (e) program differentiation. Program differentiation referred to
any attempts by the program evaluators to verify the design conditions so
that only the experimental group received the intervention.

FACTORS THAT INFLUENCE IMPLEMENTATION

Weissberg (1990) noted that a combination of factors interacts to influence both the outcomes that a program produces and the program’s future
viability. These include the content and structure of an intervention, the manner in which it is implemented, relationships between program imple-
menters and participants, and a variety of system-level variables. Thus, to
adequately assess implementation, information is needed about the spe-
cific program components, how they were delivered, and the characteris-
tics of the context (e.g., individuals, setting) in which the program was con-
ducted (Dane & Schneider, 1998; Pentz et al., 1990).
Chen (1998) provided a conceptual model for factors that influence implementation. These factors include characteristics of (a) the implementation system (i.e., process and structure of the implementation and training
system), (b) characteristics of the implementer (e.g., teacher and school
staff), and (c) characteristics of the setting in which the program is imple-
mented (e.g., school climate, principal support, and district support).
These factors are likely to influence both the implementation itself and the outcomes. Elias and colleagues (this issue) cogently discuss these issues
as they relate to best practices in implementing new programs. Greenberg,
Domitrovich, and Graczyk (2000) adapted Chen’s model for use in the spe-
cific context of school-based preventive interventions.

WHY MEASURE IMPLEMENTATION?

There are at least five rationales for conducting implementation research. First, without implementation information it is impossible to know what
actually happened during an intervention trial. This includes what actually
took place, the quality of the program delivered, and whether the target au-
dience was reached. Once this information is established, it can be used to
explain variation in observed changes in outcomes. A second rationale is
that relating implementation quality to program outcomes is critical for es-
tablishing the internal validity of a program and it strengthens any conclu-
sions that are made about the program’s role in producing change (Durlak,
1998). Researchers conducting evaluation studies have been cautioned
about the danger of Type III error. This type of error occurs when one as-
sumes that the effects of an intervention are meaningful and conclusive,
when in reality it is delivered so poorly as to invalidate outcome analyses
(Dobson & Cook, 1980; Scanlon, Horst, Nay, Schmidt, & Waller, 1977). For
example, without measuring implementation quality, one may incorrectly
judge a program ineffective when, in fact, negative outcome findings are a
result of shortcomings in service delivery. Further, in controlled designs, it
is important to monitor implementation in both the treatment and control
groups to verify the study design. An intervention may appear ineffective
if the control group receives some form of intervention outside the re-
searchers’ control. This situation is occurring more frequently, particularly
in schools or communities where multiple prevention initiatives are con-
ducted simultaneously without any coordination of services.
A third rationale for implementation research is to understand the inter-
nal dynamics and operations of an intervention program. These include
how the pieces of the program fit together, how the users of the program
interact (trainers, providers, and recipients), and the obstacles they face
and resolve. It helps the researchers understand the strengths and weak-
nesses of the program, and both anticipated and unanticipated conse-
quences of the intervention. A fourth use of implementation data is to
provide a source of ongoing feedback that is useful for continuous quality
improvement. A fifth rationale for studying implementation is to advance
knowledge on best practices for replicating, maintaining, and diffusing
programs in complex “real world” systems (Rogers, 1995; Scheirer, 1994).
This process requires testing program theory, establishing the essential
program components that contribute to outcomes, and understanding the
conditions that are necessary for successful implementation. Weissberg
and Greenberg (1998) noted that an important outcome of intervention re-
search may be the finding that sufficient quality of implementation for a
desired intervention cannot be achieved.

LIMITED ATTENTION TO IMPLEMENTATION

Given the avowed importance of implementation research in fully understanding how and under what conditions programs may be effective, it is
surprising that a key shortcoming in many preventive intervention studies
is that investigators assess program outcomes while failing to examine
most, or any, aspects of implementation. Durlak (1997) noted that fewer than 5% of over 1,200 published prevention studies provided data on program
implementation. A recent meta-analysis of indicated prevention programs
found that 68.5% of the programs were described too broadly to be repli-
cated and very few included measurement of treatment fidelity (Durlak &
Wells, 1998).
Gresham and his colleagues (Gresham et al., 1993) conducted a review of
school-based intervention studies published between 1980 and 1990. Using a
very basic definition of implementation, they coded the studies and found
that only 35% provided an operational definition of their intervention
through a detailed description or reference to a manual. Only 27 studies
(14.9%) systematically measured and reported levels of treatment integrity.
The authors did not mention whether any individual studies related imple-
mentation to outcomes, but using meta-analytic techniques, they were able to
identify a significant relationship between effect size and treatment integrity.
More recently, Dane and Schneider (1998) examined program integrity
in studies of school-based behavioral interventions conducted between
1980 and 1994. The authors made a distinction between “promotion” and
“verification” of integrity. The use of a manual, formal training, and ongoing consultation or support were considered steps that programs took to promote program integrity. In contrast, monitoring adherence or measuring dosage were considered procedures that actually verified program integrity. All studies were examined for specific features that promoted
fidelity. Those that specified procedures for verifying integrity were coded
along five dimensions: adherence, dosage, quality of program delivery,
participant responsiveness, and program differentiation.
Dane and Schneider (1998) also examined prevention trials in which di-
mensions of program implementation (i.e., integrity or dosage measures)
were analyzed in relation to outcomes. The results confirmed the impor-
tance of integrity information, particularly measures of adherence and ex-
posure, for outcomes. In another study, outcomes were only evident when a
specific proportion of the program content was provided (e.g., Botvin,
Baker, Filazzola, & Botvin, 1990). Intervention effects were most often found
when trained observers, rather than service providers, were the source used
to gather data. The authors noted that the variability in the sources and as-
pects of integrity limited their ability to make firm conclusions about the ef-
fect of implementation on program efficacy (Dane & Schneider, 1998).
As these reviews indicate, the majority of clinical trials are conducted
without any source of implementation information. A growing number of
prevention programs, particularly in the substance abuse literature, moni-
tored implementation extensively and have shown that variability in the
quality of implementation is related to program outcomes (Basch, 1984;
Blakely et al., 1987; Botvin et al., 1990; Botvin, Baker, Dusenbury, Botvin, &
Diaz, 1995; Connell, Turner, & Mason, 1985; Hansen, Graham,
Wolkenstein, & Lundy, 1989; Gottfredson, Gottfredson, & Hybl, 1993;
Rohrbach, Graham, & Hansen, 1993; Ross, Luepker, Nelson, Saavedra, &
Hubbard, 1991; Sobol et al., 1989; Taggart, Bush, Zuckerman, & Theiss,
1990; Tricker & Davis, 1988).

THE CURRENT STUDY

A well-designed program that is based on a strong conceptual model is necessary, but not sufficient, to produce behavior changes in target groups
(Botvin et al., 1990; Connell et al., 1985). The goal of this article is to examine
issues regarding implementation within the context of school-based pre-
vention programs that were described as “effective” in our recent program
review. The report concluded that advances in theory, program develop-
ment, and scientific evaluation have led to important new findings show-
ing the promise of preventive approaches to reduce mental disorders in
childhood.

Background on Center for Mental Health Services Report

The review included prevention programs for children aged 5 to 18 that produced improvements in specific psychological symptoms or in factors
directly associated with increased risk for child mental disorders. Pro-
grams were included if they had been evaluated using either a random-
ized-trial design or a quasi-experimental design that used an adequate
comparison group. Studies were required to have both pre- and post-find-
ings, and preferably also follow-up data to examine the duration and stabil-
ity of program effects. In addition, it was required that the programs have a
written manual that specifies the model and procedures to be used in the in-
tervention. It was also necessary to clearly specify the sample and their be-
havioral and social characteristics. Thirty-four different programs were
classified as effective and included in the final selection because they met
all of these criteria (see Table 1).

Overview of Implementation as Addressed in Effective Programs

To examine implementation within the context of this report, the 34 effective programs were classified using a system based on the work of Dane
and Schneider (1998). All studies were examined for specific features that
related to program integrity. Similar to Dane and Schneider, a distinction
was made between strategies that promote integrity (e.g., manual, staff
training) and procedures (e.g., monitoring adherence, dosage) that verify
integrity. All of the programs in our report promoted integrity to some de-
gree. This was not surprising given that one of the inclusion criteria was the
use of a manual or detailed program description. Some programs took ad-
ditional steps to promote integrity by either including staff training or on-
going supervision and support. A little over half of the programs (56%) re-
ported that they used all three strategies (i.e., manual, training, and
supervision).

TABLE 1
Effective Programs for the Prevention of Mental Health Disorders in School-Aged Children

Adolescent Transitions Program (Indicated): Andrews, Solomon, & Dishion (1995); Dishion & Andrews (1995); Dishion, Andrews, Kavanagh, & Soberman (1996); Irvine, Biglan, Smolkowski, Metzler, & Ary (in press).
Anger Coping Program (Indicated): Lochman (1985, 1992); Lochman, Burch, Curry, & Lampron (1984); Lochman & Lampron (1988); Lochman, Lampron, Gemmer, Harris, & Wyckoff (1989); Lochman & Wells (1996).
Attributional Intervention/Brainpower Program (Indicated): Hudley & Graham (1993, 1995).
Big Brothers/Big Sisters Program (Selected): Grossman & Tierney (1998); Tierney, Grossman, & Resch (1995).
Child Development Project (Universal): Battistich, Schaps, Watson, & Solomon (1996); Solomon, Watson, Battistich, Schaps, & Delucchi (1996); Solomon, Watson, Delucchi, Schaps, & Battistich (1988); Watson, Battistich, & Solomon (1997).
Children of Divorce Intervention Program (Selected): Alpert-Gillis, Pedro-Carroll, & Cowen (1989); Pedro-Carroll, Alpert-Gillis, & Cowen (1992); Pedro-Carroll & Cowen (1985).
Children of Divorce Parenting Program (Selected): Wolchik et al. (1993).
Coping With Stress Course (Selected): Clarke et al. (1995).
Counselors CARE and Coping and Support Training (Indicated): Randell, Eggert, & Pike (in press).
Earlscourt Social Skills Group Training (Indicated): Pepler, King, Craig, Byrd, & Bream (1995).
Family Bereavement Program (Selected): Sandler et al. (1992).
FAST Track (Universal, Selected, & Indicated components): Conduct Problems Prevention Research Group (1992, 1998, 1999a, 1999b).
First Steps to Success (Selected): Walker et al. (1998); Walker, Stiller, Severson, Feil, & Golly (1998).
Good Behavior Game (Universal): Dolan et al. (1993); Kellam, Ling, Merisca, Brown, & Ialongo (1998); Kellam & Rebok (1992); Kellam, Rebok, Ialongo, & Mayer (1994).
Improving Social Awareness - Social Problem Solving (Universal): Bruene-Butler, Hampson, Elias, Clabby, & Schuyler (1997); Elias et al. (1986); Elias, Gara, Schuyler, Branden-Muller, & Sayette (1991).
Interpersonal Cognitive Problem-Solving (Universal): Shure & Spivack (1982, 1988); Shure (1979, 1988, 1997).
Intervention Campaign Against Bully-Victim Problems (Universal): Olweus (1991, 1993, 1994).
Linking the Interests of Families and Teachers (Universal): Reid, Eddy, Fetrow, & Stoolmiller (in press).
Montreal Longitudinal Experimental Study (Indicated): McCord, Tremblay, Vitaro, & Desmarais-Gervais (1994); Tremblay, Masse, Pagani, & Vitaro (1996); Tremblay, Masse et al. (1992); Tremblay, Vitaro et al. (1992); Vitaro & Tremblay (1994).
Peer Coping Skills Training (Indicated): Prinz, Blechman, & Dumas (1994).
Penn Prevention Program (Selected): Gillham, Reivich, Jaycox, & Seligman (1995); Jaycox, Reivich, Gillham, & Seligman (1994).
Positive Youth Development Program (Universal): Caplan et al. (1992); Weissberg, Barton, & Shriver (1997).
Promoting Alternative THinking Strategies (Universal): Greenberg, Kusche, Cook, & Quamma (1995); Greenberg & Kusche (1993, 1996, 1997, 1998a, 1998b); Conduct Problems Prevention Research Group (1999b).
Primary Mental Health Project (Selective): Cowen et al. (1996); Lorion, Caldwell, & Cowen (1976); Weissberg, Cowen, Lotyczewski, & Gesten (1983); Cowen, Gesten, & Wilson (1979); Hightower (1997).
Queensland Early Intervention and Prevention of Anxiety Project (Indicated): Dadds et al. (1999); Dadds, Spence, Holland, Barrett, & Laurens (1997).
Responding in Peaceful and Positive Ways (Universal): Farrell, Meyer, & White (1998).
School Transitional Environment Project (Universal): Felner & Adan (1988); Felner et al. (1993); Felner, Ginter, & Primavera (1982).
Seattle Social Development Project (Universal): Hawkins, Catalano, Kosterman, Abbott, & Hill (in press); Hawkins et al. (1992); Hawkins, Von Cleve, & Catalano (1991); O’Donnell, Hawkins, Catalano, Abbott, & Day (1995).
Second Step Violence Prevention Curriculum (Universal): Grossman et al. (1997).
Social Relations Program (Selected): Lochman, Coie, Underwood, & Terry (1993).
Stress Inoculation Program I (Selective): Hains & Szyjakowski (1990).
Stress Inoculation Program II (Selective): Kiselica, Baker, Thomas, & Reedy (1994).
Suicide Prevention Program I (Universal): Klingman & Hochdorf (1993).
Suicide Prevention Program II (Universal): Orbach & Bar-Joseph (1993).

There was considerable variability in the type and number of integrity dimensions that were actually verified across programs. Overall, 76% (26
of 34) of the effective programs verified program integrity in some way.
We included the same aspects of program integrity that Dane and Schnei-
der (1998) used in their review with the exception of “quality of program
delivery”. The four verification procedures coded in the current study
were fidelity and adherence (i.e., whether key components of their inter-
vention were delivered as prescribed), dosage (i.e., the amount of the service delivered), participant responsiveness (i.e., degree of participant satisfaction or involvement), and program differentiation (i.e., verifying the content of experimental conditions). Of these dimensions, fidelity and dosage
were the two aspects of implementation that were monitored most often.
Twenty programs (59%) included some rating of fidelity and adherence
in their implementation data. For the majority of these studies this in-
volved tracking the program’s essential components with ratings made by
independent observers or the program implementers. In 3 of the 20 stud-
ies, fidelity was assessed indirectly; high fidelity was assumed when a sig-
nificant difference was found between program participants and controls
along a behavioral dimension (e.g., teacher practices, student perceptions)
that was a target mediator of the intervention. Although this method provided important information, it cannot verify that the behavioral changes were not a function of some factor unrelated to the intervention. Regarding
the other dimensions of implementation, dosage (i.e., the amount of an in-
tervention administered to participants) was reported in 33% of the stud-
ies. Four programs (12%) assessed participant responsiveness, and two
programs (6%) assessed program differentiation.
Interestingly, only 11 of the 34 studies (32%) utilized implementation infor-
mation as a source of data for outcome analyses. In some cases, descriptive
statistics were conducted on the implementation information, but the data were not related to program outcomes. Four studies examined dosage-re-
sponse relationships (i.e., Promoting Alternative Thinking Strategies; Anger
Coping; Adolescent Transition Project; Anti-Bullying Campaign) and results
indicated that higher quantities of the intervention were related to better out-
comes. Seven studies used fidelity and adherence ratings to examine whether
quality of implementation was related to outcomes. When significant results
were found, higher fidelity was related to stronger program outcomes.

EXAMPLES OF EXEMPLARY PROGRAMS

Four programs (Child Development Project; Promoting Alternative Thinking Strategies; Social Decision Making and Social Problem Solving
Project; Seattle Social Development Project) will be described in greater de-


tail because they represent exemplary examples of how implementation in-
formation is used to improve the quality of prevention research. These pro-
grams verified fidelity and adherence with an independent observer and
related implementation information to program outcomes.

Child Development Project


The Child Development Project (CDP) is a universal prevention program that focuses primarily on changing the school ecology to create schools
that are caring communities of learners. CDP provides school staff train-
ing in the use of cooperative learning and a language arts model that fos-
ters cooperative learning, as well as a developmental approach to disci-
pline that promotes self-control. School-wide community-building
activities are used to promote school bonding, and parent involvement
activities reinforce the family–school partnership. The program was eval-
uated over 4 years (baseline followed by three intervention years) in a
quasi-experimental, multisite demonstration trial involving approxi-
mately 4,500 third- through sixth-grade students in 24 diverse schools
throughout the United States (Battistich, Schaps, Watson, & Solomon,
1996; Battistich, Schaps, Watson, Solomon, & Lewis, in press; Watson, Battistich, & Solomon, 1997).
Implementation information was gathered each year of the project both
through classroom observations and teacher questionnaire instruments. A
composite index of program implementation was constructed by averag-
ing scores from these sources (Watson et al., 1997). Seven dimensions of
teacher behavior (promotion of autonomy, use of cooperative learning,
promotion of positive social behavior, personal relating, minimizing exter-
nal control, emphasis on extrinsic motivation, elicitation of student think-
ing and discussion) were coded from the observations. In addition, four
aspects of teacher attitudes (optimism about student learning, trust in stu-
dents, belief in student self-direction, belief in promoting autonomy) were
drawn from questionnaires. Analyses examining program implementa-
tion over different years of the project revealed discriminant validity as
well as variability when teachers in program schools were compared to
those in comparison schools (Battistich et al., 1996, in press; Watson et al.,
1997). Schools were classified as either high, moderate, or low based on the
degree of positive change made by teachers from baseline. Program out-
comes for the entire sample at the end of the trial were mixed, and as many
outcomes favored comparison students as program students. The authors
reconducted the analyses with the five high implementation program
schools, arguing that the effects of the program were more appropriately interpreted under higher implementation conditions. Watson et al. (1997)
found that results for students in these program schools were positive and
statistically significant, though some were relatively small (effect sizes
ranging from .13 to .47). The strongest effects were found on students’
sense of the school as a community and intrinsic prosocial motivation.
Given these findings, two similar sets of analyses were conducted on
the problem behavior data. When the entire sample was included in analyses, there was little support for program impacts (i.e., only 2 of 19
univariate planned comparisons favored program students). A number of
significant differences were found between students in high-change pro-
gram and comparison schools. Students in high change schools reported
less alcohol use (p < .05) and less marijuana use (p < .01). Significant effects
favoring the program students were also found on academic outcomes
(Battistich et al., in press). In total, significant effects favoring program stu-
dents in the high implementation schools were found for 52% of the out-
come variables with effect sizes ranging from .09 to .33 (Battistich et al., in
press). Overall, students in the five high-change schools increased signifi-
cantly in the degree to which they felt their school was a community dur-
ing the three intervention years, and students in the other program schools
actually declined compared to matched nonprogram schools. The authors
noted that this measure was considered a “proxy indicator” of effective
program implementation (Watson et al., 1997).
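In code, the CDP measurement strategy described above — averaging dimension scores into a composite implementation index and classifying schools by teachers' change from baseline — might be sketched as follows. The dimension names, scores, and cut points are hypothetical illustrations, not values or instruments from the trial.

```python
from statistics import mean

# Hypothetical per-teacher scores (1-5 scales) on a few of the observed
# behavior dimensions and questionnaire-based attitude dimensions.
observed = {"autonomy": 4.2, "cooperative_learning": 3.8, "prosocial": 4.0}
attitudes = {"optimism": 3.5, "trust": 4.1}

def composite_index(observed, attitudes):
    """Average all dimension scores into a single implementation index."""
    return mean(list(observed.values()) + list(attitudes.values()))

def classify_school(baseline, current, cut_low=0.2, cut_high=0.5):
    """Classify a school by the degree of positive change from baseline.
    The cut points are illustrative, not taken from the CDP evaluation."""
    change = current - baseline
    if change >= cut_high:
        return "high"
    if change >= cut_low:
        return "moderate"
    return "low"

index = composite_index(observed, attitudes)  # ≈ 3.92
print(classify_school(baseline=3.4, current=index))
```

In the actual evaluation, the index combined the seven observation dimensions and four questionnaire dimensions listed above, and the change scores were computed over the project years rather than a single pair of time points.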

Promoting Alternative Thinking Strategies (PATHS)

PATHS is an elementary school-based program that promotes social and emotional competence, with a central emphasis on teaching students to identify, understand, and self-regulate their emotions. Greenberg and colleagues
(Greenberg & Kusche, 1997, 1998a, 1998b; Greenberg, Kusche, Cook, &
Quamma, 1995) conducted several randomized controlled trials of PATHS
with a variety of populations (e.g., with regular education students, with
deaf children, with behaviorally at-risk students, and as a universal inter-
vention in a multicomponent comprehensive program). In a randomized
controlled trial with 200 second- and third-grade regular education stu-
dents, PATHS produced significant improvements in social problem solv-
ing and understanding of emotions at posttest. Compared to controls, general education intervention children showed 1-year follow-up improvements on social problem solving, emotional understanding, self-reported conduct problems, teacher ratings of adaptive behavior, and cognitive abilities related to social planning and impulsivity (Greenberg & Kusche, 1997, 1998a; Greenberg et al., 1995).
These improvements were maintained at 1-year follow-up and, more
importantly, additional significant reductions in teacher and student re-
ports of conduct problems appeared at 2-year follow-up. For children with
special needs, results indicated posttest improvement on teacher-rated so-
cial competence, child report of depressive symptoms, and emotional un-
derstanding and social-cognitive skills. At 1-year and 2-year follow-up,
both teachers and children separately reported significant improvements
in both internalizing (e.g., depression and somatic complaints) and
externalizing behavior problems, as well as improved social planning and
decreased cognitive impulsivity (Greenberg & Kusche, 1997, 1998b;
Greenberg et al., 1995).
Recently, the Conduct Problem Prevention Research Group (CPPRG;
1999) examined the effects of PATHS in the context of a larger conduct
problem preventive intervention. As the universal component of the Fast
Track Program, PATHS was provided to all children in the intervention
schools of that project. In that article, the outcomes following the first year of the program were presented for three consecutive cohorts of first-grade children. The results indicated that students in the intervention classrooms were less aggressive according to peers, and intervention classrooms were rated by observers as having a more positive atmosphere than control classrooms (CPPRG, 1999).
Two measures were used to assess the quality of implementation
during this trial. To obtain a measure of program dosage, teachers
were asked to report the number of lessons that they taught each week.
In addition, program fidelity was assessed by the project staff with di-
rect observations of the teachers in the context of their ongoing consul-
tation. Four dimensions were coded: (a) quality of teaching PATHS
concepts, (b) modeling of PATHS concepts throughout the day, (c)
quality of classroom management during PATHS lessons, and (d)
openness to consultation from staff members. No effects for dosage
were found. To examine quality of implementation effects, outcome
measures were examined in which significant intervention versus con-
trol differences had already been established. In all cases, ratings of
teacher skill in program implementation and classroom management
predicted classroom differences in positive program outcomes. Spe-
cifically, the teacher’s rated skill in teaching PATHS concepts, manag-
ing the classroom, and modeling and generalizing PATHS concepts
throughout the classroom day were all significantly related to teacher
ratings of Authority-Acceptance (p < .001, p < .001, p < .001, respectively). These three measures were also related to observer ratings of classroom atmosphere (p < .01, p < .01, p < .01, respectively).
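As a rough sketch of this kind of implementation-outcome analysis — relating consultant ratings of teacher skill to a classroom-level outcome — the following computes a simple Pearson correlation over hypothetical ratings. The Fast Track analyses themselves used more elaborate statistical models, and the data here are invented for illustration.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: consultant ratings of skill in teaching PATHS
# concepts (1-5) and classroom-level problem-behavior scores, where
# lower scores mean fewer observed problems.
skill_rating = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0]
problem_score = [0.9, 0.8, 0.6, 0.5, 0.4, 0.3]

r = pearson_r(skill_rating, problem_score)
print(round(r, 2))  # strongly negative in this toy data
```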

Social Decision-Making and Social Problem Solving (SDS-SPS)

SDS-SPS targets the transition to middle school as a normative life event that places children at increased risk for poor outcomes (Bruene-Butler, Hampson, Elias, Clabby, & Schuyler, 1997). SDS-SPS, previously known as Improving Social Awareness-Social Problem Solving (ISA-SPS), focuses on
individual skill building to promote social competence, decision making,
group participation, and social awareness. The 2-year program is given to
students before their transition to middle school, as it seeks to bolster stu-
dents’ resilience in the face of the many stresses related to school change.
Using a quasi-experimental design, Elias and colleagues (Bruene-Butler et
al., 1997; Elias, Gara, Schuyler, Branden-Muller, & Sayette, 1991) found im-
provements in youth self-report of coping with stressors related to middle
school transition and teacher reports of behavior. Six years later, children
who received the program continued to appear better adjusted than chil-
dren in the comparison schools. Boys who did not receive the program had
higher rates of involvement with alcohol, violent behavior toward others,
and self-destructive and identity problems compared to those in the pro-
gram. Girls who did not receive the program reported higher rates of ciga-
rette smoking, chewing tobacco, and vandalism relative to girls who were
in the program.
Since SDS-SPS began in 1979, one of its primary goals has been detailed monitoring of the implementation process (Elias et al., 1986). In one
trial conducted by Elias et al. (1991), the design of the study allowed the au-
thors to examine program fidelity in relation to outcomes. Within the ex-
perimental group, two cohorts of students received the intervention at
different levels of fidelity (the E2 group received a higher-quality program than the E1 group, whose training was considered “moderate”). These classifications were based on ratings made by on-site implementers and consultants during the training. In the majority of the analyses, the two
experimental groups were not differentiated. The exception to this was
one of the final analyses, which was conducted separately for boys and
girls. When only the variables that differentiated program and control girls
were examined (based on results of an earlier discriminant function analy-
sis), significant group differences were found between the high- and mod-
erate-implementation groups on the students’ behavioral competence,
on-the-job performance, and self-efficacy.

Seattle Social Development Project

The Seattle Social Development Project (Hawkins et al., 1992) is a comprehen-


sive universal prevention program that addresses multiple risk and protec-
tive factors across both individual and ecological domains (individual,
school, and family). With a strong emphasis on creating and maintaining
strong school and family bonds, the program combines modified teacher
practices and parent training across a 6-year intervention period. Classroom teachers were trained in SSDP instructional methods with three major components: proactive classroom management, interactive teaching,
and cooperative learning. These teaching approaches were used in combi-
nation with (a) classroom-based cognitive and social skills training in first
and sixth grade; and (b) parent training that emphasized child behavior
management in first or second grade, academic support in second or third
grade, and preventing drug use and antisocial behavior in fifth or sixth
grade.
The Seattle Social Development Project utilized two instruments to
document implementation (Hawkins et al., 1992; Hawkins, Von Cleve, &
Catalano, 1991; O’Donnell, Hawkins, Catalano, Abbott, & Day, 1995). The
first, a teacher self-report instrument, was completed weekly by teachers
in the experimental classrooms to provide project staff with information
regarding the implementation of program elements. Teachers were also
observed by trained observers (blind to experimental conditions) who
used a structured observation system (The Interactive Teaching Map;
Kerr, Kent, & Lam, 1985). Observations were made twice a year for 50 min
on two consecutive days. The data from these observations were transformed into a single implementation score that reflected the teacher’s appropriate use of the targeted teaching strategies. These data were used to confirm that experimental teachers implemented the targeted teaching practices significantly more than teachers of control participants did.
For 5 of the 6 intervention years, intervention teachers used practices that
were taught by the program significantly more than control teachers did.
O’Donnell et al. (1995) included student ratings of teachers’ use of coopera-
tive methods and opportunities for classroom involvement as an addi-
tional measure of implementation. There were significant group
differences on perceptions of teacher practices (both dimensions) but only
for girls.
Until recently, the implementation data collected during the program were not included as variables in outcome analyses (Hawkins et al., 1991,
1992; O’Donnell et al., 1995). Abbott et al. (1998) found that after control-
ling for baseline levels on measures, teachers’ degree of implementation
(i.e., use of experimental instructional and management methods) was significantly predictive of opportunities for student involvement, actual
classroom involvement, reinforcement for involvement, and higher stu-
dent bonding to school.
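The SSDP approach — collapsing observation data into a single implementation score per teacher and confirming that experimental teachers outscored control teachers — can be illustrated with hypothetical scores. The actual scoring rules of the Interactive Teaching Map are not reproduced here, and the numbers below are invented.

```python
from statistics import mean, stdev

# Hypothetical single implementation scores (proportion of observed
# intervals showing the targeted teaching practices), one per teacher.
experimental = [0.72, 0.65, 0.80, 0.70, 0.68]
control = [0.45, 0.50, 0.40, 0.52, 0.48]

def cohens_d(a, b):
    """Standardized mean difference between two groups (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2
                  + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

print(round(mean(experimental) - mean(control), 2))  # raw group difference
print(round(cohens_d(experimental, control), 1))     # standardized difference
```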

DISCUSSION

The results of the current study indicate that among the 34 programs designated as effective in reducing child problem behavior, there is considerable variability in the extent to which attention has been paid to the measurement of implementation. Most studies report that they have measured adherence
and fidelity or dosage. However, there were only seven studies (21%) in
which more than one implementation dimension (i.e., fidelity and adher-
ence, dosage, participant responsiveness, and program differentiation)
was assessed within the same evaluation. No investigator measured more
than two dimensions at one time. In addition, about one third of the pro-
grams linked variability in implementation indices to differences in pro-
gram outcomes. Thus, although we know that there are main effects of
these prevention models, little is known about how the degree to which
they operate as designed is related to their effectiveness. As a result, there is
a dearth of information in the published literature to guide schools and
communities in decision making regarding program adoption and replica-
tion. There are probably a number of reasons for these omissions. These in-
clude the relative lack of focus on implementation as compared to out-
comes on the part of both funders and journal reviewers, as well as an
overemphasis on the goal of obtaining main effects to demonstrate that prevention programs are an effective use of resources in education and human services.
Given the current state of implementation research, we use the remain-
der of the discussion for two purposes: (a) to provide guidelines for ad-
vancing the study of implementation, and (b) to suggest strategies for
consultants and schools that would enhance the process of program adop-
tion and measurement of implementation.

The Importance of Qualitative Information

In contrast to the absence of formal measurement or reports on implementation, discussions with program developers and school consultants often reveal a great deal of valuable information on dissemination that does not often enter the professional literature (see Elias et al., this issue; Weissberg,
1990). Most of this information is qualitative in nature, a wisdom literature
generated from personal experiences and observations of program imple-
mentation in context. It is unfortunate and shortsighted that the accumu-
lated wisdom regarding implementation has not been viewed as necessary
in the reporting of prevention trials. It is heartening that the growing inter-
est in implementation has led to a recent emphasis on its importance
(Gager & Elias, 1997; Journal of Educational and Psychological Consultation,
Vol. 11, No. 1 and this issue). For example, in the reporting of the Blueprints
programs, it is required that a chapter be devoted to factors that impact im-
plementation and best practices to improve implementation (Elliot, 1998).
An implication of the value and need for qualitative data is that when
consultants or schools consider program adoption, they should not rely
solely on published quantitative data of effectiveness. In addition, they are
advised to talk with sites already using the program to gain local insights
into the practical issues associated with implementation of the program.

Program Theory and Measurement of Implementation

A weakness in many programs is the absence of a clearly articulated model or theory of how the intervention produces change. Durlak (1995,
1998) suggested that a starting point for measuring implementation is for
a program to specify its program components, or active ingredients. Pro-
gram components should be observable and include all materials and ac-
tivities used in the intervention (Scheirer, 1994). It is unclear from the
description of most of the 34 interventions if factors such as the quality of
trainers, quality of training, feedback from teachers or other deliverers,
or the activity of other system members (actions of school principals, so-
cial service directors, or supervisors) are considered active ingredients.
Once the program’s active ingredients are established, an objective as-
sessment system is needed to monitor the quality and quantity of the
program (Durlak, 1998). Ideally, independent, unbiased observers should
be used, though as this review revealed, many programs rely on self-re-
port ratings by the change agent or changes in behavioral dimensions
theoretically linked to the intervention.
Chen (1998) argued that although an intervention is the major change
agent in a program, the “implementation system” is likely to make an im-
portant contribution to program outcomes. The implementation system
provides the means and a context for the intervention and is affected by a
number of factors such as characteristics of the implementers, the nature of
the implementing organization, and the quality of the linkages between this organization and the broader community (Chen, 1998). For example, it
is common in school-based programs for outside trainers to be employed
to train teachers and support staff in the program model. Thus, the quality of the training, the experience of the trainers with the model, and the possible provision of ongoing support for implementation are examples of
critical areas for investigation. At present, the implementation or training
system itself has been subject to little or no research and is an area in need of investigation to understand quality of implementation.


These findings suggest that there are a number of considerations for
consultants to ponder when considering the adoption and implementation
of a new program: (a) Assess the clarity of the program’s theory and how it
directly relates to how staff should be trained and supported. (b) If training
is required before using a program, the quality of the training should be as-
sessed; this includes the skills needed to become proficient in the program
and the knowledge of the participants’ skills before and after the training.
(c) When implementation begins, determine whether or not the essential
components are actually being delivered; just because a lesson is com-
pleted does not mean that the essential points or skills were well covered.
(d) Information on the quality of the delivery of the program should be
gathered, ideally using ratings by an unbiased observer, or at a minimum
the program deliverer. Quality of delivery may be assessed along various
dimensions, including efficacy, affective quality, and responsiveness. (e)
Consider the use of satisfaction surveys to assess participants’ (and teachers’) enthusiasm for a program, whether it was useful for them, and whether they benefited from the program.
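Purely as an illustration, the five considerations above could be tracked as a simple per-site checklist; the item wording and reporting logic below are a sketch, not a published instrument.

```python
# Condensed restatements of considerations (a)-(e) above.
CHECKLIST = [
    "Program theory is clear and tied to staff training and support",
    "Quality of required training was assessed (skills before/after)",
    "Essential components are verified as actually delivered",
    "Quality of delivery is rated, ideally by an unbiased observer",
    "Participant satisfaction and perceived benefit are surveyed",
]

def readiness_report(completed_indices):
    """Summarize which considerations a site has addressed."""
    unmet = [item for i, item in enumerate(CHECKLIST)
             if i not in completed_indices]
    return len(CHECKLIST) - len(unmet), len(CHECKLIST), unmet

done, total, unmet = readiness_report({0, 2, 3})
print(f"{done}/{total} considerations addressed")
for item in unmet:
    print("TODO:", item)
```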

Adaptation, Fidelity, and the Challenge of Implementation

Assessing implementation is a complicated process because the “gaps” between plans and delivery may be positive or negative (Scheirer, 1994). This
issue is particularly relevant when considering the content of specific pro-
grams. It is likely that successful implementation requires more than just
faithfully replicating program components. Elias (1997) and others
(Eveland, Rogers, & Klepper, 1977; Meyer, Miller, & Herman, 1993; Rogers,
1978) noted that interventions are often adjusted to meet the needs and ca-
pacity of local communities, or to allow consumers to gain ownership of
programs. There has been some argument in the literature regarding the
degree to which this type of “reinvention” reduces or improves program
effectiveness. This debate between fidelity and adaptation is critical to consider (Dane & Schneider, 1998). Some researchers are concerned that efficacy is compromised if programs are adapted to the specific features of adopting sites. Others argue that implementation quality may be compromised
if service providers are not able to modify programs to meet their needs.
We see these as empirical questions that should be subjected to experimen-
tal trials; that is, systematic variations in the adaptation of empirically validated prevention programs (e.g., modifications in program content, models of training, types and characteristics of the deliverers, and dosage) should be studied.
Some research has already begun to develop systematic methods to
operationalize the core program components of social and educational in-
novations (Blakely et al., 1987; Fullan & Stiegelbauer, 1991; Hall & Hord,
1987). Structured interviews and observations have been used to examine
the degree to which components are implemented with fidelity to a vali-
dated program model, or adapted by program implementers to meet per-
ceived ecological needs of the context in which the program is being
delivered.
It is very likely that as educational and psychological consultants sup-
port the adoption of programs for their schools and communities, it will be
necessary from the outset to make local adaptations. Further, over time it is
likely that additional changes will be made by local program deliverers.
Although these may be both necessary and effective, it is important from
the outset to document departures from the original program’s form.
Without information on these departures, it is impossible to determine whether they have improved or undermined the effectiveness of the program.

Moving From Effective Adoption to Institutionalization

As prevention research progresses to more widespread, community-based applications (i.e., dissemination and diffusion), another critical research
question involves identifying what factors increase the potential for
institutionalization of effective preventive interventions. It is clear that
there is a need for more research to better understand the organizational
structures and policies of schools that are necessary to support long-term
implementation of quality prevention programs. Consultants will find
valuable information on factors to consider in supporting long-term imple-
mentation and institutionalization in the recent volume, Promoting Social and Emotional Learning: Guidelines for Educators (Elias et al., 1997).

Throughout the country there has been a dramatic increase in interest in utilizing empirically based programs both to prevent problem behavior and to promote resiliency. School districts and social service agencies
are developing greater awareness of the need to utilize empirically vali-
dated programs and their “best practices.” When applying these programs
to the specific context of a school or community there are numerous chal-
lenges in creating readiness, developing an effective model of training,
garnering contextual support, monitoring implementation, and evaluat-
ing outcomes. We highlighted both effective programs and issues to consider in program adoption and the measurement of implementation. To
effectively implement and maintain quality prevention programs in our schools and communities, educational and psychological consultants will play an important role in the measurement and support of the implementation process.

ACKNOWLEDGMENT

For additional information concerning the Prevention Research Center, see www.psu.edu/dept/prevention.

REFERENCES

Abbott, R. D., O’Donnell, J., Hawkins, D., Hill, K. G., Kosterman, R., & Catalano, R. F. (1998).
Changing teaching practices to promote achievement and bonding to school. American
Journal of Orthopsychiatry, 68, 542–552.
Basch, C. E. (1984). Research on disseminating and implementing health education programs
in schools. Journal of School Health, 54, 57–66.
Battistich, V., Schaps, E., Watson, M., & Solomon, D. (1996). Prevention effects of the Child De-
velopment Project: Early findings from an ongoing multi-site demonstration trial. Journal
of Adolescent Research, 11, 12–35.
Battistich, V., Schaps, E., Watson, M., Solomon, D., & Lewis, C. (in press). Effects of the Child
Development Project on students’ drug use and other problem behaviors, Journal of Pri-
mary Prevention.
Blakely, C. H., Mayer, J. P., Gottschalk, R. G., Schmitt, N., Davidson, W. S., Roitman, D. B., &
Emshoff, J. G. (1987). The fidelity-adaptation debate: Implications for the implementation
of public sector social programs. American Journal of Community Psychology, 15, 253–268.
Botvin, G. J., Baker, E., Dusenbury, L., Botvin, E. M., & Diaz, T. (1995). Long-term follow-up re-
sults of a randomized drug abuse prevention trial in a white middle-class population. Jour-
nal of the American Medical Association, 273, 1106–1112.
Botvin, G. J., Baker, E., Dusenbury, L., Tortu, S., & Botvin, E. M. (1990). Preventing adolescent
drug abuse through a multi-modal cognitive-behavioral approach: Results of a 3-year
study. Journal of Consulting and Clinical Psychology, 58, 437–446.
Botvin, G. J., Baker, E., Filazzola, A., & Botvin, E. M. (1990). A cognitive-behavioral approach to substance abuse prevention: A one-year follow-up. Addictive Behaviors, 15, 47–63.
Bruene-Butler, L., Hampson, J., Elias, M., Clabby, J., & Schuyler, T. (1997). The Improving So-
cial Awareness-Social Problem Solving Project. In G. W. Albee & T. P. Gullotta (Eds.), Pri-
mary prevention works (pp. 239–267). Thousand Oaks, CA: Sage.
Catalano, R. F., Berglund, M. L., Ryan, J. A. M., Lonczak, H. C., & Hawkins, J. D. (1998). Positive
youth development in the United States: Research findings on evaluations of positive youth develop-
ment programs (NICHD Publication). Washington, DC: U. S. Department of Health and Human
Services.
Chen, H. (1998). Theory-driven evaluations. Advances in Educational Productivity, 7, 15–34.
Conduct Problems Prevention Research Group. (1999). Initial impact of the Fast Track preven-
tion trial for conduct problems: II. Classroom effects. Journal of Consulting and Clinical Psy-
chology, 67, 648–657.
Connell, D. B., Turner, R. R., & Mason, E. F. (1985). Summary of the findings of the School
Health Education Evaluation: Health promotion effectiveness, implementation, and costs.
Journal of School Health, 55, 316–323.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.
Department of Education. (1998). Applying effective strategies to prevent or reduce substance abuse,
violence, and disruptive behavior among youth. Rockville, MD: Scattergood, Dash, Epstein, &
Adler.
Dobson, D., & Cook, T. J. (1980). Avoiding type III error in program evaluation: Results from a
field experiment. Evaluation and Program Planning, 3, 269–276.
Drug Strategies. (1996). Making the grade: A guide to school drug prevention programs. Washington, DC: Author.
Drug Strategies. (1998). Safe schools, safe students: A guide to violence prevention strategies. Washington, DC: Author.
Durlak, J. A. (1995). School-based prevention programs for children and adolescents. Thousand
Oaks, CA: Sage.
Durlak, J. A. (1997). Successful prevention programs for children and adolescents. New York:
Plenum.
Durlak, J. A. (1998). Why program implementation is important. Journal of Prevention and Inter-
vention in the Community, 17, 5–18.
Durlak, J. A., & Wells, A. M. (1998). Evaluation of indicated preventive intervention (second-
ary prevention) mental health programs for children and adolescents. American Journal of
Community Psychology, 26, 775–802.
Elias, M. J. (1997). Reinterpreting dissemination of prevention programs as widespread imple-
mentation with effectiveness and fidelity. In R. P. Weissberg & T. P. Gullotta, (Eds.),
Healthy children 2010: Establishing preventive services. Issues in children’s and families’ lives
(Vol. 9, pp. 253–289). Thousand Oaks, CA: Sage.
Elias, M. J., Gara, M., Ubriaco, M., Rothbaum, P. A., Clabby, J. F., & Schuyler, T. (1986). The im-
pact of a preventive social problem-solving intervention on children’s coping with middle
school stressors. American Journal of Community Psychology, 14, 259–275.
Elias, M. J., Gara, M. A., Schuyler, T. F., Branden-Muller, L. R., & Sayette, M. A. (1991). The pro-
motion of social competence: Longitudinal study of a preventive school-based program.
American Journal of Orthopsychiatry, 61, 409–417.
Elias, M. J., Zins, J. E., Weissberg, R. P., Greenberg, M. T., Haynes, N. M., Kessler, R.,
Schwab-Stone, M. E., & Shriver, T. P. (1997). Promoting social and emotional learning: Guidelines for educators. Alexandria, VA: Association for Supervision and Curriculum Development.
Elliot, D. (1998). Blueprints for violence prevention. Golden, CO: Venture.
Eveland, J. D., Rogers, E., & Klepper, C. (1977). The innovation process in public organizations:
Some elements of a preliminary model. Springfield, VA: NTIS.
Fullan, M. G., & Stiegelbauer, S. (1991). The new meaning of educational change (2nd ed.). New
York: Teachers College Press.
Gager, P. J., & Elias, M. J. (1997). Implementing prevention programs in high-risk environments:
Application of the resiliency paradigm. American Journal of Orthopsychiatry, 67, 363–373.
Gottfredson, D. C., Gottfredson, G. D., & Hybl, L. G. (1993). Managing adolescent behavior: A
multiyear, multischool study. American Educational Research Journal, 30, 179–216.
Greenberg, M. T., Domitrovich, C., & Bumbarger, B. (1999). Preventing mental disorder in
school-aged children: A review of the effectiveness of prevention programs. Report submitted to
The Center for Mental Health Services (SAMHSA), Prevention Research Center, Pennsyl-
vania State University (obtainable at http://www.psu.edu/dept/prevention/).
Greenberg, M. T., Domitrovich, C., & Graczyk, P. A. (2000). The study of implementation in
school-based prevention research: Theory, research, and practice. Report submitted to The Cen-
ter for Mental Health Services (SAMHSA).
Greenberg, M. T., & Kusche, C. A. (1997, April). Improving children’s emotion regulation and social
competence: The effects of the PATHS curriculum. Paper presented at meeting of Society for
Research in Child Development, Washington, DC.
Greenberg, M. T., & Kusche, C. A. (1998a). Preventive intervention for school-aged deaf chil-
dren: The PATHS curriculum. Journal of Deaf Studies and Deaf Education, 3, 49–63.
Greenberg, M. T., & Kusche, C. A. (1998b). Promoting social competence and preventing maladjust-
ment in school-aged children: The effects of the PATHS curriculum. Manuscript submitted for
publication.
Greenberg, M. T., Kusche, C. A., Cook, E. T., & Quamma, J. P. (1995). Promoting emotional
competence in school-aged deaf children: The effects of the PATHS curriculum. Develop-
ment and Psychopathology, 7, 117–136.
Greenberg, M. T., Zins, J. E., Elias, M. J., & Weissberg, R. P. (in press). School-based prevention:
Promoting positive youth development through social and emotional learning. American
Psychologist.
Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and
prereferral intervention. School Psychology Review, 18, 37–50.
Gresham, F. M., Gansle, K. A., Noell, G. H., Cohen, S., & Rosenblum, S. (1993). Treatment in-
tegrity of school-based behavioral intervention studies: 1980–1990. School Psychology Re-
view, 22, 254–272.
Hall, G. E., & Hord, S. M. (1987). Change in school: Facilitating the process. Albany: State Univer-
sity of New York Press.
Hansen, W. B., Graham, J. W., Wolkenstein, B. H., & Lundy, B. Z. (1989). Differential impact of
three alcohol prevention curricula on hypothesized mediating variables. Journal of Drug
Education, 18, 143–153.
Hawkins, J. D., Catalano, R. F., Morrison, D., O’Donnell, J., Abbott, R., & Day, L. (1992). The Se-
attle Social Development Project: Effects of the first four years on protective factors and
problem behaviors. In J. McCord & R. Tremblay (Eds.), The prevention of antisocial behavior in
children (pp. 139–161). New York: Guilford.
Hawkins, J. D., Von Cleve, E., & Catalano, R. F. (1991). Reducing early childhood aggression:
Results of a primary prevention program. Journal of the American Academy of Child and Ado-
lescent Psychiatry, 30, 208–217.
Institute of Medicine. (1994). Reducing risks for mental disorders: Frontiers for preventive interven-
tion research. Washington, DC: National Academy Press.
Kerr, D. M., Kent, L., & Lam, T. (1985). Measuring program implementation with a classroom
observation instrument: The Interactive Teaching Map. Evaluation Review, 9, 461–482.
Meyer, A., Miller, S., & Herman, M. (1993). Balancing the priorities of evaluation with the pri-
orities of the setting: A focus on positive youth development programs in school settings.
Journal of Primary Prevention, 14, 95–113.
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology
Review, 11, 247–266.
National Institute of Mental Health. (1998). Priorities for prevention research (NIH Publication
No. 98-4321). Washington, DC: Department of Health and Human Services.
National Institute on Drug Abuse. (1997). Preventing drug use among children and adolescents
(NIH Publication No. 97-4212). Washington, DC: Author.
O’Donnell, J., Hawkins, J. D., Catalano, R. F., Abbott, R. D., & Day, L. E. (1995). Preventing
school failure, drug use, and delinquency among low-income children: Long-term inter-
vention in elementary schools. American Journal of Orthopsychiatry, 65, 87–100.
Olds, D., Robinson, J., Song, N., Little, C., & Hill, P. (1999). Reducing the risks for mental disorders
during the first five years of life: A review of preventive interventions. Report submitted to The
Center for Mental Health Services (SAMHSA), Prevention Research Center for Family and
Child Health, University of Colorado Health Sciences Center.
Pentz, M. A., Trebow, E. A., Hansen, W. B., MacKinnon, D. P., Dwyer, J. H., Flay, B. R., Daniels,
S., Cormack, C., & Johnson, C. A. (1990). Effects of program implementation on adolescent
drug use behavior: The Midwestern Prevention Project. Evaluation Review, 14, 264–289.
Rogers, E. M. (1978). Reinvention during the innovation process. In M. Radnor (Ed.), The diffu-
sion of innovations: An assessment (Contract No. PRA-7680388). Washington, DC: National
Science Foundation.
Rogers, E. M. (1995). Diffusion of innovations. New York: Free Press.
Rohrbach, L. A., Graham, J. W., & Hansen, W. B. (1993). Diffusion of a school-based substance
abuse prevention program: Predictors of program implementation. Preventive Medicine, 22,
237–260.
Ross, J. G., Luepker, R. V., Nelson, G. D., Saavedra, P., & Hubbard, B. M. (1991). Teenage
health teaching modules: Impact of teacher training on implementation and student out-
comes. Journal of School Health, 61, 31–34.
Scanlon, J. W., Horst, P., Nay, J., Schmidt, R. E., & Waller, A. E. (1977). Evaluability assess-
ment: Avoiding type III and IV errors. In G. R. Gilbert & P. J. Conklin (Eds.), Evaluation
management: A source book of readings (pp. 71–90). Charlottesville, VA: U.S. Civil Service
Commission.
Scheirer, M. A. (1987). Program theory and implementation theory: Implications for evalua-
tors. In L. Bickman (Ed.), Using program theory in evaluation (pp. 59–76). San Francisco:
Jossey-Bass.
Scheirer, M. A. (1994). Process evaluation. In J. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.),
Handbook of practical program evaluation (pp. 40–67). San Francisco: Jossey-Bass.
Sobol, D. F., Rohrbach, L. A., Dent, C. W., Gleason, L., Brannon, B. R., Johnson, C. A., & Flay,
B. R. (1989). The integrity of smoking prevention curriculum delivery. Health Education
Research, 4, 59–67.
Taggart, V. S., Bush, P. J., Zuckerman, A. E., & Theiss, P. K. (1990). A process evaluation of the
District of Columbia “know your body” project. Journal of School Health, 60, 60–66.
Tobler Research Associates, LLC. (1988). School-based drug prevention programs: Technical report.
Washington, DC: National Committee for Abuse Prevention.
216 DOMITROVICH AND GREENBERG

Tricker, R., & Davis, L. G. (1988). Implementing drug education in schools: An analysis of the
costs and teacher perceptions. Journal of School Health, 58, 181–185.
Watson, M., Battistich, V., & Solomon, D. (1997). Enhancing students’ social and ethical devel-
opment in schools: An intervention program and its effects. International Journal of Educa-
tional Research, 27, 571–586.
Weissberg, R. P. (1990). Fidelity and adaptation: Combining the best of two perspectives. In P.
Tolan, C. Keys, F. Chertok, & L. Jason (Eds.), Researching community psychology: Issues of the-
ories and methods (pp. 186–190). Washington, DC: American Psychological Association.
Weissberg, R. P., & Greenberg, M. T. (1998). School and community competence-enhancement
and prevention programs. In I. Sigel & A. Renninger (Eds.), Handbook of child psychology,
Vol. 4: Child psychology in practice (5th ed.). New York: Wiley.
Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of suc-
cessful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical
Psychology, 49, 156–167.

PROGRAM REFERENCES

Alpert-Gillis, L. J., Pedro-Carroll, J., & Cowen, E. L. (1989). The Children of Divorce Interven-
tion Program: Development, implementation, and evaluation of a program for young ur-
ban children. Journal of Consulting and Clinical Psychology, 57, 583–589.
Andrews, D. W., Solomon, L. H., & Dishion, T. J. (1995). The Adolescent Transition Program: A
school-based program for high-risk teens and their parents. Education & Treatment of
Children, 18, 478–498.
Battistich, V., Schaps, E., Watson, M., & Solomon, D. (1996). Prevention effects of the Child De-
velopment Project: Early findings from an ongoing multi-site demonstration trial. Journal
of Adolescent Research, 11, 12–35.
Bruene-Butler, L., Hampson, J., Elias, M., Clabby, J., & Schuyler, T. (1997). The Improving So-
cial Awareness-Social Problem Solving Project. In G. W. Albee & T. P. Gullotta (Eds.), Pri-
mary prevention works (pp. 239–267). Thousand Oaks, CA: Sage.
Caplan, M., Weissberg, R. P., Grober, J. S., Sivo, P. J., Grady, K., & Jacoby, C. (1992). Social
competence promotion with inner city and suburban young adolescents: Effects on
school adjustment and alcohol use. Journal of Consulting and Clinical Psychology, 60, 56–63.
Clarke, G. N., Hawkins, W., Murphy, M., Sheeber, L. B., Lewinsohn, P. M., & Seeley, J. R. (1995).
Targeted prevention of unipolar depressive disorder in an at-risk sample of high school
adolescents: A randomized trial of a group cognitive intervention. Journal of the American
Academy of Child and Adolescent Psychiatry, 34, 312–321.
Conduct Problems Prevention Research Group. (1992). A developmental and clinical model
for the prevention of conduct disorders: The FAST Track Program. Development and
Psychopathology, 4, 509–527.
Conduct Problems Prevention Research Group. (1998, August). Results of the Fast Track
Prevention Project: Grade 3 outcomes. Paper presented at the American Psychological
Association meeting, San Francisco.
Conduct Problems Prevention Research Group. (1999a). Initial impact of the Fast Track Pre-
vention trial for conduct problems: I. The high-risk sample. Journal of Consulting and Clini-
cal Psychology, 67, 631–647.
Conduct Problems Prevention Research Group. (1999b). Initial impact of the Fast Track Pre-
vention trial for conduct problems: II. Classroom effects. Journal of Consulting and Clinical
Psychology, 67, 648–657.
Cowen, E. L., Gesten, E. L., & Wilson, A. B. (1979). The primary mental health project (PMHP):
Evaluation of current program effectiveness. American Journal of Community Psychology, 7,
293–303.
Cowen, E. L., Hightower, A. D., Pedro-Carroll, J. L., Work, W. C., Wyman, P. A., & Haffey, W.
G. (1996). School-based prevention for children at risk: The Primary Mental Health Project. Wash-
ington, DC: American Psychological Association.
Dadds, M. R., Holland, D. E., Laurens, K. R., Mullins, M., Barrett, P. M., & Spence, S. H. (1999).
Early intervention and prevention of anxiety disorders in children: Results at 2-year fol-
low-up. Journal of Consulting and Clinical Psychology, 67, 145–150.
Dadds, M. R., Spence, S. H., Holland, D. E., Barrett, P. M., & Laurens, K. R. (1997). Prevention
and early intervention for anxiety disorders: A controlled trial. Journal of Consulting and
Clinical Psychology, 65, 627–635.
Dishion, T. J., & Andrews, D. W. (1995). Preventing escalation in problem behaviors with
high-risk young adolescents: Immediate and 1-year outcomes. Journal of Consulting and
Clinical Psychology, 63, 538–548.
Dishion, T. J., Andrews, D. W., Kavanagh, K., & Soberman, L. H. (1996). Preventive interven-
tions for high-risk youth: The adolescent transitions program. In R. DeV. Peters & R. J.
McMahon (Eds.), Preventing childhood disorders, substance abuse and delinquency (pp. 184–
214). Thousand Oaks, CA: Sage.
Dolan, L. J., Kellam, S. G., Brown, C. H., Werthamer-Larson, L., Rebok, G. W., Mayer, L. S.,
Laudoff, J., Turkkan, J., Ford, C., & Wheeler, L. (1993). The short-term impact of two class-
room-based preventive interventions on aggressive and shy behaviors and poor achieve-
ment. Journal of Applied Developmental Psychology, 14, 317–345.
Elias, M. J., Gara, M. A., Schuyler, T. F., Branden-Muller, L. R., & Sayette, M. A. (1991). The
promotion of social competence: Longitudinal study of a preventive school-based program.
American Journal of Orthopsychiatry, 61, 409–417.
Elias, M. J., Gara, M. A., Ubriaco, M., Rothbaum, P., Clabby, J., & Schuyler, T. (1986). Impact of
a preventive social problem solving intervention on children’s coping with middle school
stressors. American Journal of Community Psychology, 14, 259–275.
Farrell, A. D., Meyer, A. L., & White, K. S. (1998). Evaluation of Responding in Peaceful and Positive
Ways (RIPP): A school-based prevention program for reducing violence among urban adolescents.
Manuscript submitted for publication.
Felner, R. D., & Adan, A. M. (1988). The school transitional project: An ecological intervention
and evaluation. In R. H. Price, E. L. Cowen, R. P. Lorion, & J. Ramos-McKay (Eds.), 14
ounces of prevention: A casebook for practitioners (pp. 111–122). Washington, DC: American
Psychological Association.
Felner, R. D., Brand, S., Adan, A. M., Mulhall, P. F., Flowers, N., Sartain, B., & DuBois, D. L.
(1993). Restructuring the ecology of the school as an approach to prevention during school
transitions: Longitudinal follow-ups and extensions of the School Transitional Environ-
ment Project (STEP). Prevention in Human Services, 10, 103–136.
Felner, R. D., Ginter, M., & Primavera, J. (1982). Primary prevention during school transitions: Social
support and environmental structure. American Journal of Community Psychology, 10, 277–290.
Gillham, J. E., Reivich, K. J., Jaycox, L. H., & Seligman, M. E. P. (1995). Prevention of depressive
symptoms in schoolchildren: Two-year follow-up. Psychological Science, 6, 343–351.
Greenberg, M. T., & Kusche, C. A. (1993). Promoting social and emotional development in deaf
children: The PATHS project. Seattle: University of Washington Press.
Greenberg, M. T., & Kusche, C. A. (1996). The PATHS project: Preventive intervention for children.
Final Report to the National Institute of Mental Health, Grant number R01MH42131.
Greenberg, M. T., & Kusche, C. A. (1997, April). Improving children’s emotion regulation and social
competence: The effects of the PATHS curriculum. Paper presented at meeting of Society for
Research in Child Development, Washington, DC.
Greenberg, M. T., & Kusche, C. A. (1998a). Preventive intervention for school-aged deaf chil-
dren: The PATHS curriculum. Journal of Deaf Studies and Deaf Education, 3, 49–63.
Greenberg, M. T., & Kusche, C. A. (1998b). Promoting social competence and preventing maladjustment in
school-aged children: The effects of the PATHS curriculum. Manuscript submitted for publication.
Greenberg, M. T., Kusche, C. A., Cook, E. T., & Quamma, J. P. (1995). Promoting emotional
competence in school-aged deaf children: The effects of the PATHS curriculum. Develop-
ment and Psychopathology, 7, 117–136.
Grossman, D. C., Neckerman, H. J., Koepsell, T. D., Liu, P., Asher, K. N., Beland, K., Frey, K., &
Rivara, F. P. (1997). Effectiveness of a violence prevention curriculum among children in
elementary school. Journal of the American Medical Association, 277, 1605–1611.
Grossman, J. B., & Tierney, J. P. (1998). Does mentoring work? An impact study of the Big
Brothers Big Sisters program. Evaluation Review, 22, 403–426.
Hains, A. A., & Szyjakowski, M. (1990). A cognitive stress-reduction intervention program for
adolescents. Journal of Counseling Psychology, 37, 79–84.
Hawkins, J. D., Catalano, R. F., Kosterman, R., Abbott, R., & Hill, K. (in press). Preventing ado-
lescent health-risk behaviors by strengthening protection during childhood. Archives of Pe-
diatrics and Adolescent Medicine.
Hawkins, J. D., Catalano, R. F., Morrison, D., O’Donnell, J., Abbott, R., & Day, L. (1992). The Seattle
Social Development Project: Effects of the first four years on protective factors and prob-
lem behaviors. In J. McCord & R. Tremblay (Eds.), The prevention of antisocial behavior in chil-
dren (pp. 139–161). New York: Guilford.
Hawkins, J. D., Von Cleve, E., & Catalano, R. F. (1991). Reducing early childhood aggression:
Results of a primary prevention program. Journal of the American Academy of Child and Ado-
lescent Psychiatry, 30, 208–217.
Hightower, A. D. (1997). Primary Mental Health Project. In G. W. Albee & T. P. Gullotta (Eds.),
Primary prevention works (pp. 191–212). Thousand Oaks, CA: Sage.
Hudley, C., & Graham, S. (1993). An attributional intervention to reduce peer-directed aggres-
sion among African-American boys. Child Development, 64, 124–138.
Hudley, C., & Graham, S. (1995). School-based interventions for aggressive African- American
boys. Applied & Preventive Psychology, 4, 185–195.
Irvine, A. B., Biglan, A., Smolkowski, K., Metzler, C. W., & Ary, D. V. (1999). The effectiveness
of a parenting skills program for parents of middle school students in small communities.
Journal of Consulting and Clinical Psychology, 67, 811–825.
Jaycox, L. H., Reivich, K. J., Gillham, J., & Seligman, M. (1994). Prevention of depressive symp-
toms in school children. Behaviour Research and Therapy, 32, 801–816.
Kellam, S. G., Ling, X., Merisca, R., Brown, C. H., & Ialongo, N. (1998). The effect of the level of
aggression in the first grade classroom on the course and malleability of aggressive behav-
ior into middle school. Development & Psychopathology, 10, 165–185.
Kellam, S. G., & Rebok, G. W. (1992). Building developmental and etiological theory through
epidemiologically based preventive intervention trials. In J. McCord & R. E. Tremblay
(Eds.), Preventing antisocial behavior: Interventions from birth through adolescence (pp. 162–
194). New York: Guilford.
Kellam, S. G., Rebok, G. W., Ialongo, N., & Mayer, L. S. (1994). The course and malleability of
aggressive behavior from early first grade into middle school: Results of a developmental
epidemiologically based preventive trial. Journal of Child Psychology and Psychiatry, 35,
259–281.
Kiselica, M. S., Baker, S. B., Thomas, R. N., & Reedy, S. (1994). Effects of stress inoculation train-
ing on anxiety, stress, and academic performance among adolescents. Journal of Counseling
Psychology, 41, 335–342.
Klingman, A., & Hochdorf, Z. (1993). Coping with distress and self-harm: The impact of a pri-
mary prevention program among adolescents. Journal of Adolescence, 16, 121–140.
Lochman, J. E. (1985). Effects of different treatment lengths in cognitive-behavioral interven-
tions with aggressive boys. Child Psychiatry and Human Development, 16, 45–56.
Lochman, J. E. (1992). Cognitive-behavioral intervention with aggressive boys: Three-
year follow-up and preventive efforts. Journal of Consulting and Clinical Psychology, 60,
426–432.
Lochman, J. E., Burch, P. R., Curry, J. F., & Lampron, L. B. (1984). Treatment and generalization
effects of cognitive-behavioral and goal-setting interventions with aggressive boys. Journal
of Consulting and Clinical Psychology, 52, 915–916.
Lochman, J. E., Coie, J. D., Underwood, M. K., & Terry, R. (1993). Effectiveness of a social rela-
tions intervention program for aggressive and non-aggressive, rejected children. Journal of
Consulting and Clinical Psychology, 61, 1053–1058.
Lochman, J. E., & Lampron, L. B. (1988). Cognitive behavioral interventions for aggressive boys:
Seven months follow-up effects. Journal of Child and Adolescent Psychotherapy, 5, 15–23.
Lochman, J. E., Lampron, L. B., Gemmer, T. C., Harris, S. R., & Wyckoff, G. M. (1989). Teacher
consultation and cognitive-behavioral intervention with aggressive boys. Psychology in the
Schools, 26, 179–188.
Lochman, J. E., & Wells, K. C. (1996). A social-cognitive intervention with aggressive children:
Prevention effects and contextual implementation issues. In R. DeV. Peters & R. J.
McMahon (Eds.), Preventing childhood disorders, substance abuse and delinquency (pp. 111–
143). Thousand Oaks, CA: Sage.
Lorion, R. P., Caldwell, R. A., & Cowen, E. L. (1976). Effects of a school mental health project: A
one-year follow-up. Journal of School Psychology, 14, 56–63.
McCord, J., Tremblay, R. E., Vitaro, F., & Desmarais-Gervais, L. (1994). Boys’ disruptive be-
havior, school adjustment, and delinquency: The Montreal Prevention Experiment. Inter-
national Journal of Behavioural Development, 17, 739–752.
O’Donnell, J., Hawkins, J. D., Catalano, R. F., Abbott, R. D., & Day, L. E. (1995). Preventing
school failure, drug use, and delinquency among low-income children: Long-term inter-
vention in elementary schools. American Journal of Orthopsychiatry, 65, 87–100.
Olweus, D. (1991). Bully/victim problems among school children: Basic facts and effects
of an intervention program. In D. J. Pepler & K. H. Rubin (Eds.), The development and
treatment of childhood aggression (pp. 411–448). Hillsdale, NJ: Lawrence Erlbaum Asso-
ciates, Inc.
Olweus, D. (1993). Bullying at school: What we know and what we can do. Oxford: Blackwell.
Olweus, D. (1994). Annotation: Bullying at school: Basic facts and effects of a school based in-
tervention program. Journal of Child Psychology and Psychiatry, 35, 1171–1190.
Orbach, I., & Bar-Joseph, H. (1993). The impact of a suicide prevention program for adoles-
cents on suicidal tendencies, hopelessness, ego identity, and coping. Suicide and Life-
Threatening Behavior, 23(2), 120–129.
Pedro-Carroll, J. L., Alpert-Gillis, L. J., & Cowen, E. L. (1992). An evaluation of the efficacy of a
preventive intervention for 4th–6th grade urban children of divorce. Journal of Primary Pre-
vention, 13, 115–130.
Pedro-Carroll, J. L., & Cowen, E. L. (1985). The Children of Divorce Intervention Program: An
investigation of the efficacy of a school-based prevention program. Journal of Consulting
and Clinical Psychology, 53, 603–611.
Pepler, D. J., King, G., Craig, W., Byrd, B., & Bream, L. (1995). The development and evaluation
of a multisystem social skills group training program for aggressive children. Child &
Youth Care Forum, 24, 297–313.
Prinz, R. J., Blechman, E. A., & Dumas, J. E. (1994). An evaluation of peer coping-skills training
for childhood aggression. Journal of Clinical Child Psychology, 23, 193–203.
Randell, B. P., Eggert, L. L., & Pike, K. C. (in press). Immediate post-intervention effects of two
brief youth suicide prevention interventions. Suicide and Life-Threatening Behavior.
Reid, J. B., Eddy, J. M., Fetrow, R. A., & Stoolmiller, M. (1999). Description and immediate im-
pacts of a preventive intervention for conduct problems. American Journal of Community
Psychology, 27, 483–517.
Sandler, I. N., West, S. G., Baca, L., Pillow, D. R., Gersten, J. C., Rogosch, F., Virdin, L., Beals, J.,
Reynolds, K. D., Kallgren, C., Tein, J., Kriege, G., Cole, E., & Ramirez, R. (1992). Linking em-
pirically based theory and evaluation: The Family Bereavement Program. American Journal
of Community Psychology, 20, 491–521.
Shure, M. B. (1979). Training children to solve interpersonal problems: A preventive mental
health program. In R. F. Munoz, L. R. Snowden, & J. G. Kelly (Eds.), Social and psychological
research in community centers (pp. 30–68). San Francisco: Jossey-Bass.
Shure, M. B. (1988). How to think not what to think: A cognitive approach to prevention. In L.
A. Bond & B. M. Wagner (Eds.), Families in transition: Primary prevention programs that work
(pp. 170–199). Newbury Park, CA: Sage.
Shure, M. B. (1997). Interpersonal cognitive problem solving: Primary prevention of early
high-risk behaviors in the preschool and primary years. In G. W. Albee & T. P. Gullotta
(Eds.), Primary prevention works (pp. 167–188). Thousand Oaks, CA: Sage.
Shure, M. B., & Spivack, G. (1982). Interpersonal problem solving in young children: A cogni-
tive approach to prevention. American Journal of Community Psychology, 10, 341–356.
Shure, M. B., & Spivack, G. (1988). Interpersonal Cognitive Problem Solving. In R. H. Price, E.
L. Cowen, R. P. Lorion, & J. Ramos-McKay (Eds.), 14 ounces of prevention: A casebook
for practitioners (pp. 69–82). Washington, DC: American Psychological Association.
Solomon, D., Watson, M., Battistich, V., Schaps, E., & Delucchi, K. (1996). Creating classrooms
that students experience as communities. American Journal of Community Psychology, 24,
719–748.
Solomon, D., Watson, M., Delucchi, K., Schaps, E., & Battistich, V. (1988). Enhancing children’s
prosocial behavior in the classroom. American Educational Research Journal, 25, 527–554.
Tierney, J. P., Grossman, J. B., & Resch, N. L. (1995). Making a difference: An impact study of Big
Brothers/Big Sisters. Philadelphia, PA: Public/Private Ventures.
Tremblay, R. E., Masse, L. C., Pagani, L., & Vitaro, F. (1996). From childhood aggression to ado-
lescent maladjustment: The Montreal Prevention Experiment. In R. DeV. Peters & R. J.
McMahon (Eds.), Preventing childhood disorders, substance abuse and delinquency (pp.
268–298). Thousand Oaks, CA: Sage.
Tremblay, R. E., Masse, B., Perron, D., LeBlanc, M., Schwartzman, A. E., & Ledingham, J. E.
(1992). Early disruptive behavior, poor school achievement, delinquent behavior, and de-
linquent personality: Longitudinal analyses. Journal of Consulting and Clinical Psychology,
60, 64–72.
Tremblay, R. E., Vitaro, F., Bertrand, L., LeBlanc, M., Beauchesne, H., Boileau, H., & David, L.
(1992). Parent and child training to prevent early onset of delinquency: The Montreal Lon-
gitudinal-Experimental study. In J. McCord & R. E. Tremblay (Eds.), Preventing antisocial
behavior: Interventions from birth through adolescence (pp. 117–138). New York: Guilford.
Vitaro, F., & Tremblay, R. E. (1994). Impact of a prevention program on aggressive children’s
friendships and social adjustment. Journal of Abnormal Child Psychology, 22, 457– 475.
Walker, H., Kavanagh, K., Stiller, B., Golly, A., Severson, H. H., & Feil, E. G. (1998). First step
to success: An early intervention approach for preventing school antisocial behavior. Jour-
nal of Emotional and Behavioral Disorders, 6, 66–80.
Walker, H., Stiller, B., Severson, H. H., Feil, E. G., & Golly, A. (1998). First step to success: Inter-
vening at the point of school entry to prevent antisocial behavior patterns. Psychology in the
Schools, 35, 259–269.
Watson, M., Battistich, V., & Solomon, D. (1997). Enhancing students’ social and ethical devel-
opment in schools: An intervention program and its effects. International Journal of Educa-
tional Research, 27, 571–586.
Weissberg, R. P., Barton, H. A., & Shriver, T. P. (1997). The Social Competence Promotion Pro-
gram for young adolescents. In G. W. Albee & T. P. Gullotta (Eds.), Primary prevention works
(pp. 268–290). Thousand Oaks, CA: Sage.
Weissberg, R. P., Cowen, E. L., Lotyczewski, B. S., & Gesten, E. L. (1983). The primary mental
health project: Seven consecutive years of program outcome research. Journal of Consulting
and Clinical Psychology, 51, 100–107.
Wolchik, S. A., West, S. G., Westover, S., Sandler, I. N., Martin, A., Lustig, J., Tein, J., & Fisher, J.
(1993). The Children of Divorce Parenting Intervention: Outcome evaluation of an empiri-
cally based program. American Journal of Community Psychology, 21, 293–331.

Celene E. Domitrovich is a Child Clinical Psychologist and the Assistant Director of the
Pennsylvania State Prevention Research Center. Her research interests include developing
preventive interventions that reduce mental health disorders in young children and the role of
social relationships in development.

Mark T. Greenberg is the Bennett Chair for Prevention Research at Pennsylvania State Uni-
versity and the Director of the Pennsylvania State Prevention Research Center. Dr. Greenberg
is interested in the promotion of social emotional learning in schools and has researched the
ways in which social-emotional skills protect children from poor outcomes and promote posi-
tive adjustment.
