School Effectiveness and School Improvement
Vol. 17, No. 4, December 2006, pp. 441 – 464

The Development and Testing of a School Improvement Model

Kenneth Leithwood*, Doris Jantzi, and Charryn McElheron-Hopkins
OISE/University of Toronto, Canada

This multimethod study generated and tested a "best evidence" model of school improvement
processes (SIP) capable of improving student achievement. Initially developed through the review
of a comprehensive body of previous empirical research, the model was further refined through a
2.5-year longitudinal study in 10 schools. A quantitative test of this refined model was then conducted
using survey evidence from administrators, teachers, parents, and students in 100 elementary
schools. The model as a whole explained modest but significant amounts of variation in student
achievement across schools. School leadership and SIP implementation processes accounted for the
largest proportion of explained variation.

Introduction
Many schools labeled "failing" or "low performing" in today's accountability
context are serving highly diverse student populations—diverse cultures, languages,
religions, and economic circumstances. Such diversity is challenging on two fronts—
the sheer range of educational needs that schools must take into account, and the
presence of significant numbers of children whose needs typically exceed the
capacity of many schools to address them adequately. Because schools in most
jurisdictions are now being held accountable for teaching all students to the same
high standards, finding ways of addressing both types of challenges has become an
urgent matter.
In this paper, we develop a "best evidence" model of school improvement
processes in schools serving diverse student populations and test its effects on student
achievement in a broad sample of schools. We do this because school improvement
planning (SIP), and action guided by such planning, is arguably the most common

*Corresponding author. OISE/University of Toronto, 252 Bloor St. West, Toronto, Ontario M5S
1V6, Canada. Email: kleithwood@oise.utoronto.ca

response to low performance and to the meeting of assigned achievement targets; this
is the case in spite of meager evidence about its effects, a matter we elaborate on
below. Sometimes SIP is invoked in combination with other strategies and
sometimes by itself. Widely discussed alternatives (or adjuncts) to SIP include the
creation of markets in order to increase competition among schools (e.g., Raywid,
1992), restructuring schools through various forms of school-based management
(Whitty, Power, & Halpin, 1998), standards setting (e.g., Feuerstein & Dietrich,
2003), and whole school reform (Herman, 1999). Evidence about the effects of these
alternatives on student achievement is most impressive for whole school reform (e.g.,
Herman, 1999). The effects of standard setting have not been subject to sufficient
evaluation to assess impact. Several other alternatives for which there is ample
evidence have failed to live up to their initial promise (for school-based management,
see Leithwood & Menzies, 2000; and for the creation of markets, see Lauder &
Hughes, 1999).

The research reported in this paper was part of a larger study of school
improvement processes with special emphasis on parents' roles in school
improvement planning (Leithwood, Jantzi, & McElheron-Hopkins, 2005).1 A mixed-methods
research design was used to collect evidence in two distinct phases. The first phase
of our work was "theory generation"; we asked "What do school improvement
processes consist of in their most powerful form?" Theory testing was the purpose
of the second phase of our research; in this phase we asked "What is the impact on
students and schools of school improvement processes in their most powerful
form?"

Phase One: Development of a model of school improvement processes


Our method for capturing exemplary school improvement processes was (a) to mine
relevant prior research for best practices and (b) to undertake our own longitudinal
case studies of school improvement processes in 10 schools. These two sources of
evidence together were used to build a multidimensional model of exemplary school
improvement processes, the main outcome of Phase One.

Prior Evidence
Locating the evidence. Our review of prior research aimed to alert us to key issues,
areas of robust knowledge, and problems concerning school improvement about
which not much was yet known (the full review can be obtained from the authors).
Relevant research was located initially through an online search of the ERIC system.
This search uncovered 29 documents on school development planning and 38 on
school improvement planning: The term "school development planning" is most
frequently used in the United Kingdom and Australia, while the term "school
improvement planning" is most often used in Canada and the United States. Forty-nine
of these 67 documents were chosen for review based on their availability and
their relevance to our study. Of these, 33 were empirical studies undertaken in
Australia, Canada, the United Kingdom (including England, Wales, Ireland, and
Scotland), and the United States.
Reviews of literature about both SIP and school effectiveness also were analyzed.
Reviews reported in The International Handbook of School Effectiveness Research
(Teddlie & Reynolds, 2000) were particularly helpful in providing an international
perspective on the field and further references about factors associated with school
effectiveness.

SIP impact. In spite of its widespread endorsement, our search uncovered a relatively
small amount of evidence concerning the organizational and student outcomes
associated with SIP. Regarding organizational outcomes, McInerney and Leach (1992)
found that positive outcomes outnumbered negative outcomes two to one. The list of
positive outcomes in this study included increased awareness of the school’s
strengths and weaknesses, increased unity of staff, increased communication with
parents, increased communication with community, curricula better suited to
student needs, increased openness by teachers to change, and opportunities for staff
development.
Only a few of the studies included in our review inquired about the effects of SIP
on students. None of the empirical studies from the United States provided such
data. As a minor exception to this claim, one U.S. study, conducted in 64 high schools
in Indiana, examined teachers' and principals' views of the impact of school
improvement planning. While the researchers noted that achievement test scores had
improved in 23 schools, neither principals nor teachers reported such an increase
themselves. Addressing a possible reason for this discrepancy, Flinspach and Ryan
(1992, p. 45) recommended measuring interim indicators of changes in student
learning which eventually may lead to improved test scores (e.g., students reading
more books, finishing more assignments, participating more in class, and improving
their writing skills).
Studies conducted in the United Kingdom, Canada, and Australia provide
additional evidence of SIP’s impact on student learning. MacGilchrist and
Mortimore (1997) examined the impact of school development planning in nine
UK primary schools. This study suggested that the type of improvement plan may be
key to impacts on schools and student learning. Four types of plans were evident in
their research: the rhetorical plan, the singular plan, the cooperative plan, and the
corporate plan. Their impacts ranged from negative to very positive. While the
cooperative plan had some positive outcomes for the school and classroom, its effect
on students was difficult to determine. Only the corporate plan, characterized by a
united effort to improve and a strong sense of shared ownership and involvement by
the teaching staff, had a noticeable effect on student learning. Of the nine schools in
this study, only two had a corporate plan: The value of a corporate or collaborative
approach to SIP is echoed in other research (Broadhead, Hodgson, Cuckle, &
Dunford, 1998; Fullan & Hargreaves, 1991; Giles, 1998).
Reeves (2000) reported research examining SIP in 24 elementary and secondary
schools in Scotland. This study found that elementary schools which produced school
development plans using "good practice" had a positive impact on student
attainment. However, the evidence of impact did not hold for secondary schools.
In both elementary and secondary schools, positive attainment was related to
capacity-building strategies. Only one of the five Australian studies included in our review
documented positive effects on student achievement. Hatton (2001, p. 130) reported
that school development or strategic planning helped one disadvantaged school work
"towards meeting the educational needs of its client group." Basic Skills test results
had improved for the school's students, who traditionally have had poor results. The
extraordinary personal efforts of the principal and the commitment of staff were
factors in this success. This school followed a style of planning similar to the
"corporate" style described by MacGilchrist and Mortimore (1997).
In Canada, the Manitoba School Improvement Program (MSIP) reported positive
impacts on student learning from its planning process (Earl & Lee, 1998). The style of
planning in the MSIP schools would be described, using the MacGilchrist and
Mortimore categories, as collaborative, with wide participation of teachers, parents,
the community, and students, as well as the administration and outside facilitators.
The ‘‘Improving the Quality of Education of All’’ (IQEA) project in the UK also
reported success in increasing student achievement (Harris, 2001; Harris & Young,
2000). Both the MSIP and IQEA projects combined the school development/
improvement planning process with capacity-building approaches that involved
stakeholders in visioning and in bringing about organizational change. School
community ownership of the process was a strong component of these projects.
Harris and Young (2000) argue that the success of these projects demonstrates that
successful SIP involves internal and external agency, a focus on specific teaching and
learning goals, commitment to teacher development and professional growth,
devolved leadership, and formative and summative evaluation. Harris and Young
(2000) and Earl and Lee (1998) claim that "urgency, energy, agency and more
energy" are the key ingredients for successful SIP.
There is, then, some limited evidence that school improvement planning and
associated processes may have positive effects on student learning, depending very
much on the form of planning used (some combination of what has been described as
collaborative and corporate) and when key conditions prevail (e.g., Earl & Lee’s,
1998, urgency, energy, and agency). But these studies have little to say about the
extent to which student diversity influences the nature of productive SIP.

School improvement planning processes. Just what are these school improvement
planning processes that have such effects? SIP processes were described in very
similar terms across studies, while the processes used to implement such plans varied
much more. Differences in organizational contexts (e.g., school culture/character-
istics and district/government role), and in the social dimensions of planning (e.g.,
leadership, collaboration, teamwork, communication, and decision-making) may
account for some of the variations in outcomes described in the previous section.
These processes usually are described as linear or cyclical, with several main stages.
Furthermore, engagement in these processes is assumed to be continuous; once the
final stage in the cycle is reached, the process begins afresh with a focus on problems
discovered during implementation, or as new priorities arise.
Hargreaves and Hopkins (1991, pp. 4 – 5) outline a five-stage improvement
process: getting started; conducting an audit of the school's strengths and weaknesses;
setting priorities and targets; implementation, or putting the plans in place; and
evaluating the success of the plans and their implementation. These stages of SIP
described by Hargreaves and Hopkins are illustrative of the main stages reported in
other literature we reviewed (e.g., Flinspach & Ryan, 1992; Heistad & Spicuzza, 2000;
McBee & Fink, 1989; McInerney & Leach, 1992; Wilson & McPake, 2000).
The first stage in SIP involves activities and decisions leading to the adoption or
beginning of the planning process. In some cases, the decision to engage in school
development or improvement planning is mandated by a senior level of
government; this is the case, for example, in Australia (Dellar, 1995; Hatton,
2001), the United Kingdom (Giles, 1998), and Chicago (Flinspach & Ryan, 1992).
Adoption may also be a choice for schools, as in the Manitoba School Improvement
Program (Earl & Lee, 1998). Communication with stakeholders in the school
community about the planning process is typically part of this stage. In many schools,
a group or several teams are organized to participate in the planning process. Training
in the process of school development/improvement planning may be undertaken.
During the second ‘‘design’’ stage, schools determine what should be included in
their plan by incorporating requirements from district and senior levels of
government with school needs and priorities. They examine their strengths and
weaknesses (sometimes referred to as conducting an audit) using achievement data
and other pertinent information (MacGilchrist & Mortimore, 1997). A plan is
established according to a framework that requires action to be taken over a period of
time, usually 1 to 5 years. During this second design stage, consideration is given to
the school’s mission, its goals, indicators of success, responsibilities for carrying out
actions, the setting for improvement, and the timing.
During the implementation stage, plans are carried out at the classroom and/or
school level. Responsibilities for implementation may be shared by the principal,
teachers, school-based decision-making groups (or improvement teams), and other
stakeholders. Monitoring is sometimes viewed as part of the implementation stage
and is carried out for formative purposes. Monitoring the effects of the plan and the
processes used for its implementation allows schools to see where they are succeeding
or where they may need to make adjustments during the implementation process.
Evaluation is sometimes undertaken by external bodies and/or by the school itself.
It may be a formal requirement; this is the case, for example, in the UK with an
inspection service which judges the failure or success of schools and their
improvement efforts. An external evaluation is also part of the Manitoba School
Improvement Program. Evaluation also may be less formal and limited to school
personnel discussing progress towards goals as they have experienced it. Reporting on
the results of the planning process within the school community or beyond to districts
or governments is also a feature of the evaluation phase in some settings
(MacGilchrist & Mortimore, 1997). This serves both formative and summative
purposes. Even in such cases, however, SIP is meant to be a continuous process so
that the results of evaluation inform future plans and directions.
In addition to the stages typically associated with SIP, our review uncovered
considerable evidence about the factors determining the outcomes of school
improvement processes, for example, the role of the principal, teacher teams, district
support, and the like. These factors are addressed more comprehensively later in the
paper.

Longitudinal Case Studies in 10 Schools


To supplement the existing literature in our effort to model the most powerful forms
of SIP, we conducted a 3-year qualitative study of school improvement planning
processes in 10 schools.

Sample. Case study schools were located almost equally in public and Catholic school
districts. These districts were mostly in the southern and central part of the province,
but spread widely from east to west; one district was in the north. Schools were
located, in almost equal numbers, in urban, suburban, and rural locations (see Table 1).
School sizes ranged from a high of 850 students to a low of approximately 240, with
a mean of about 400 students. Eight of the schools were elementary, usually JK to
8. Most were serving a high proportion of relatively needy students from lower
income families. Two schools served largely francophone populations, and one
school a predominantly Portuguese population. Physical facilities were generally
described as well maintained, and several were relatively new. Provincial achievement
evidence for math and literacy in Grades 3 and 6 indicated that all but one of the
schools were scoring below the provincial average, making them prime candidates
within their districts for relatively aggressive school improvement initiatives.

Table 1. Case school demographic information

Case  Location  Size  Level  Language  Type      Achievement (Gr. 3 / Gr. 6)*

A     Urban     460   JK-8   English   Public    34.3 / 37.0
B     Suburban  531   JK-8   English   Catholic  93.0 / 71.0
C     Urban     740   JK-8   English   Public    47.0 / 39.0
D     Rural     240   JK-8   English   Catholic  NA** / NA**
E     Urban     280   JK-8   English   Public    52.0 / 52.7
F     Urban     270   JK-8   English   Catholic  36.3 / 69.7
G     Urban     370   JK-6   French    Public    33.7 / 33.3
H     Suburban  300   JK-3   French    Catholic  30.7 / –
I     Suburban  750   JK-8   English   Public    62.7 / 65.7
J     Urban     850   JK-6   English   Public    54.0 / 51.0

*Mean percentage of students performing at Levels 3 and 4 on provincial tests in reading, writing,
and mathematics, for which 2000 – 2001 provincial means were 54% for both grades.
**The province does not post scores for schools as small as this one.

Data collection. In each school, interview data were collected from members of the
school council (administrators, teachers, and parents), as well as a small number of
parents and teachers who were not members of the council. Between 5 and 12 people
were interviewed in each school on four separate occasions roughly evenly spaced
over 3 years. The same interview protocols were used in each school, but the nature of
the interviews changed from one data collection period to the next in order to track
changes in the schools. Researchers took detailed notes during all interviews and
ensured, as needed, the accuracy and completeness of these notes by reviewing audio-
taped records made of each interview. Interview and documentary evidence were
first analyzed separately for each school; results were then aggregated across the 10
schools as part of our model-building process.

Results

A synthesis of prior research and the results of our aggregated 10 case studies were
used to construct a framework or model of school improvement planning processes.
Framework construction entailed, first, identifying the "factors" which our data
indicated were especially prominent in our case study schools and, second,
combining those factors with the results of our review of prior research.
Figure 1 is an overview of the framework resulting from these two sources of data.
It consists of a set of factors or variables and a general indication of how they are
related to one another. Variables 1 to 4, the core variables, are temporally related. A
set of processes (variable 1) initiates planning and usually culminates in a
written plan with contents unique to the school and the processes that it has used
(variable 2). The plan has goals to be achieved, and they have an intended influence
on the eventual outcomes of the process (variable 4). But activities undertaken to
accomplish those goals—or implement the plan (variable 3)—produce other
unplanned outcomes, as well.
At the top of Figure 1 are two variables which our data suggest are critical
determinants of the trajectory of the SIP processes, as well as of their outcomes. These
variables, monitoring (variable 5) and communication (variable 6), may be carried out
in a variety of ways, and more or less well, but are not necessarily the responsibility of
any single person or group. At the bottom of Figure 1 are five additional variables,
each consisting of a set of tasks undertaken by those in specific roles or positions.
Interactions among these variables occur as part of school improvement planning and
implementation processes.

Phase Two: A quantitative test of the model


The variables and relationships included in our model (Figure 1) were tested using
survey evidence collected from parents, teachers, and school administrators in seven
Ontario school districts. Two sets of quite different dependent measures were used
for this test. One set were perceived outcomes for students, principals, teachers, and
parents as reported on the surveys. The second set were mean achievement levels on
provincially administered math and literacy tests at Grades 3 and 6 (schools in Ontario
must take explicit account of their provincial math and language scores, among other
things, in developing their school improvement plans).2

Figure 1. A framework for understanding differences across schools in the outcomes of school
improvement planning processes

Sample
The population for Phase Two was the 362 elementary schools in 7 of the 10
districts that participated in our larger project. One district chose not to participate
in Phase Two and another chose to survey only the Phase One case school; districts
cited concerns about staff workload as the reason for not participating in the survey.
The seven districts were representative of diversity in the province including public
and Catholic contexts; English and French jurisdictions; urban, suburban, and rural
areas; and locations in different regions of the province. They varied in size from
approximately 20 to 150 elementary schools. Two thirds of all elementary schools
within each district were randomly selected for a total of 226 schools, in addition to
the case school in the seventh district. In one district with small rural schools, a
stratified random selection procedure was used to ensure representation of the
smaller schools.
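The selection procedure can be made concrete with a short sketch. The Python fragment below is illustrative only—the roster, district codes, and the "small school" flag are hypothetical—but it reproduces the two rules described above: a simple random draw of two thirds of the schools in most districts, and a draw stratified by school size in the district with many small rural schools.

```python
import pandas as pd

# Hypothetical roster of elementary schools; "small" flags the rural schools
# used for stratification in one district (all names and codes are invented).
roster = pd.DataFrame({
    "district": ["100"] * 90 + ["500"] * 15,
    "school": [f"school_{i:03d}" for i in range(105)],
    "small": [False] * 90 + [True, False] * 7 + [True],
})

samples = []
for district, schools in roster.groupby("district"):
    if district == "500":
        # Stratified random selection: sample two thirds within each size
        # stratum so that small schools are represented.
        chosen = schools.groupby("small", group_keys=False).sample(
            frac=2 / 3, random_state=1)
    else:
        # Simple random selection of two thirds of the district's schools.
        chosen = schools.sample(frac=2 / 3, random_state=1)
    samples.append(chosen)

selected = pd.concat(samples)
print(selected.groupby("district").size())  # roughly 60 and 10 schools
```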
The school was the unit of analysis for this study and three groups were sampled
within each school—administrators, teachers, and parents. School principals were
instructed to distribute the surveys to those teachers and parents who were most
involved in SIP and, where possible, were also members of the school council.
Response rates for administrators, teachers, and parents were 59%, 42%, and
37%, respectively. Although there were responses from 69% of the schools sampled,
not all schools had responses from all three sources. The criterion of two or more
teacher and parent respondents along with the administrator response was used to
determine the final sample of schools for analysis. Table 2 reports the achieved
sample for the 100 schools (44% of the intended sample) that met the criterion for
inclusion in the analysis. In addition to the administrator, sample schools had
responses from a mean of 4.5 teachers and 4 parents (a median of 5 teachers and 4
parents).
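As a concrete illustration of the inclusion rule, the short sketch below filters a hypothetical table of survey returns down to the schools meeting the criterion (an administrator response plus at least two teachers and two parents); the column names are ours, not from the study's actual database.

```python
import pandas as pd

# Hypothetical long-format record of survey returns: one row per respondent.
responses = pd.DataFrame({
    "school": ["S01"] * 5 + ["S02"] * 3,
    "role": ["administrator", "teacher", "teacher", "parent", "parent",
             "administrator", "teacher", "parent"],
})

# Count respondents per school and role.
counts = responses.pivot_table(index="school", columns="role",
                               aggfunc="size", fill_value=0)

# Criterion: an administrator response plus two or more teacher AND
# two or more parent respondents.
keep = (counts["administrator"] >= 1) & \
       (counts["teacher"] >= 2) & (counts["parent"] >= 2)
final_sample = counts[keep]
print(final_sample.index.tolist())  # -> ['S01']; S02 lacks a 2nd teacher/parent
```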

Instruments
Moving from the two sources of evidence used to build our model to a set of survey
questions for quantitative model testing entailed specifying, very concretely, which
features of each variable in the model help explain successful SIP. This required us to
"drill down" into our case study data and the literature to a level of specificity beyond
what has been described to this point. We did this as a way of constructing multi-item
scales to measure each variable in the framework and to examine the relationships
among those variables.
Table 3 summarizes the outcome of this effort. The far right column in Table 3
identifies the items (by number in the surveys) developed to measure each of these
conditions. Items created to measure each variable assume a specific stem which
reads approximately "To what extent do you agree that the [name of variable]." The
"valence" of some items was reversed (worded negatively rather than positively) in
the survey instruments.
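Reversed-valence items must be re-aligned before scale scores are computed. On a 5-point agreement scale like the one used here, that is the usual transformation shown below (a generic sketch, not code from the study):

```python
def reverse_code(response: int, scale_max: int = 5) -> int:
    """Re-align a negatively worded item on a 1..scale_max agreement scale,
    so that 1 becomes 5, 2 becomes 4, and so on."""
    return scale_max + 1 - response

assert reverse_code(1) == 5 and reverse_code(4) == 2
```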
From the list of items described in Table 3, three overlapping survey instruments
were created, one for each of parents, teachers, and administrators. The content
validity of these surveys was addressed through our extensive literature review and the
use of our case study results in the formulation of survey questions. Discussions
within the research team produced the final set of survey items. Face validity was
addressed by submitting the draft instruments to four researchers, each of whom had
conducted one of the case studies but had not been involved in the initial
development and selection of survey items.3
At the end of the survey development process, the final 98 items were allocated to
one of the three surveys depending on who was most likely to have direct information
about the variable being measured. School administrators were given the largest
number of items (75) because of their broader knowledge of the planning process and
influences on it from within and outside the school. Teachers responded to 54 items
and parents to 45. Information for about two thirds of the items was obtained from at
least two sources.
Table 3. From the evidence to the items: Key findings from our research

School Improvement Planning Process
- Involving all stakeholders in the plan's development contributes to completeness of the plan's content, to a common vision of future directions, to ownership of the plan by all, and to more successful implementation. (Previous research: Flinspach & Ryan, 1992; Glover, Levacic, & Bennett, 1996. Survey questions: 3, 4.)
- SIP is an ongoing process that requires updating and revision of plans. (Previous research: MacGilchrist & Mortimore, 1997; Teddlie & Reynolds, 2000. Survey questions: 2.)
- SIP must be recognized as a priority by the school community. (Previous research: O'Donoghue & Dimmock, 1996; Sackney, Walker, & Hajnal, 1998. Survey questions: 1, 28, 40.)

The Content of the School Improvement Plan
- Content should address student learning, local needs, and priorities but also be driven by data. (Previous research: Glickman, 1993; Harris & Young, 2000. Survey questions: 5, 6.)
- Content should be clear and focused. Limit plans to a manageable number of specific goals and initiatives with realistic timelines and indicators of success. (Previous research: Broadhead et al., 1998; Stoll & Fink, 1996. Survey questions: 7.)
- Selecting parent involvement as a goal increases parent participation, which leads to improved student outcomes. Involvement in student learning activities has the most direct effect on achievement. (Previous research: Henderson & Berla, 1994; Sanders & Epstein, 1998. Survey questions: 8.)

School Improvement Implementation Process
- Adequate time is needed for SIP. Consider when the best time would be to start the process, as it is time consuming. Consider all stakeholders' schedules in arranging meetings. (Previous research: Griffith, 2001; Wilson & McPake, 1998. Survey questions: 10, 11.)
- A school culture that embraces ongoing improvement must be created. Opportunities for staff development and collaboration are crucial. Recognition of hard work and results is motivating. (Previous research: Eastwood & Tallerico, 1990; Stoll & Fink, 1996. Survey questions: 63, 64, 65, 66.)
- The organization learns from new ideas, different views, and problem-solving. (Previous research: Sitkin, 1992; Watkins & Marsick, 1993. Survey questions: 9.)

School Improvement Monitoring
- Monitoring of plans and actions allows participants to evaluate progress and to see if alternatives should be explored. Seeing positive results increases motivation and effort. A variety of reliable data sources should be employed in the monitoring and evaluating of progress. In successful schools, monitoring was undertaken by school leadership teams. (Previous research: Shields, 1995; Teddlie & Reynolds, 2000. Survey questions: 13, 14, 15, 16, 17.)

Communication and School Improvement
- Everyone in the school community needs to be kept informed about plans and results. Communication must be continual and varied in methodology. (Previous research: Dellar, 1994; Leithwood, Aitken, & Jantzi, 2000. Survey questions: 17, 18, 19, 20, 21, 22, 23, 25, 30, 42.)
- In meetings, the creation of a comfortable discussion climate fosters open communication and builds understanding and trust. (Previous research: Bauer & Bogotch, 2001. Survey questions: 21, 24.)

School Leadership
- School leadership or improvement teams are key in successfully instituting SIP. When teachers, parents, and administrators work together as a team, school improvement results. Principals must enable these teams to carry out their responsibilities by providing resources and support, and by empowering teams. Principal continuity, in the long run, is less important when empowered, active SIP teams are in place. (Previous research: Broadhead et al., 1998; Harris & Young, 2000. Survey questions: 33, 34, 35.)
- The principal is instrumental in initiating SIP, in promoting awareness of and participation in SIP, and in ensuring its continuation as a priority. Principals who act as facilitative leaders encourage successful implementation of SIP. (Previous research: Reeves, 2000; Wilson & McPake, 1998. Survey questions: 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36.)
- Teachers must be energetic and enthusiastic about SIP as they shoulder much of the responsibility for planning and implementation. Teachers are in the majority on most school improvement teams. They are very influential in the SIP process. (Previous research: Earl & Lee, 1998; O'Donoghue & Dimmock, 1996. Survey questions: 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47.)

Parent Participation
- Principals, teachers, and parents on school councils or school improvement teams are instrumental in encouraging parent participation. Personal invitations are better than general requests for involvement. Parents participate in activities if they are willing and able to help, if it affects their children, and if they feel their input is welcome and needed. Recognition is important. (Previous research: Griffith, 2001; Hoover-Dempsey & Sandler, 1997. Survey questions: 23, 24, 25, 29, 30, 41, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104.)
- Parents and school councils contribute ideas and support in the planning stage (i.e., parents gave advice through surveys, interviews, and meetings of the school council and school improvement teams, while school councils helped to initiate parent participation in SIP and approved plans). Parent input adds vital information to plans. Council approval of directions builds ownership of and commitment to actions. Parents exert much influence in the planning stage. (Previous research: Hatton, 2001; Stoll & Fink, 1996. Survey questions: 48, 49, 50.)
- Parents and school councils mainly provide support in the implementation stage, although a few parents were decision-makers on school improvement teams and in charge of select implementation activities. Parents lighten teachers' workload when they help with implementation, thereby avoiding teacher burnout. However, parent influence diminishes at the implementation stage. (Previous research: Flinspach & Ryan, 1992; MacGilchrist & Mortimore, 1997. Survey questions: 48, 50, 51.)
- In the main, for SIP purposes, school councils are vehicles for gathering and disseminating information and for approval of plans/actions, rather than decision-making bodies. School improvement teams have more influence in SIP than school councils. (Previous research: Leithwood, Jantzi, & Steinbach, 1999. Survey questions: 48, 49, 50, 51.)

Teacher Collaboration
- Organizational structures and processes need to be established that support teacher collaboration and organizational learning. (Previous research: Earl & Lee, 2000; Hargreaves & Macmillan, 1991. Survey questions: 63, 64, 65, 66.)

District and Provincial Support and Context
- External support is necessary, especially in the adoption/planning stages of SIP (i.e., a facilitator, staff development, and extra resources for release time and for SIP initiatives). Ongoing support would be welcomed in schools which successfully implement SIP, but it is crucial for those schools which are struggling with the process. (Previous research: Giles, 1998; McBee & Fink, 1989. Survey questions: 67, 68, 69, 70, 71, 72, 73, 74.)

Outcomes
- School personnel experience an increased workload as a result of SIP. However, most view SIP as worth the effort because of positive outcomes such as personal development; enhanced school culture, with school-wide, team-based improvement approaches; increased parent involvement; and improved student outcomes. (Previous research: Dellar, 1995; McInerney & Leach, 1992. Survey questions: 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85.)
- Schools with strong school improvement teams which remained proactive throughout the SIP process were successful. (Previous research: Glover et al., 1996; MacGilchrist & Mortimore, 1997. Survey questions: 13, 14.)
- Learning assessment procedures and practices were strengthened. (Previous research: Earl & Lee, 1998. Survey questions: 80.)
- Student outcomes improve with SIP. Improvements in student attitudes, behaviour, learning, and achievement occur in schools which successfully implement SIP. Some improvements are evident at schools which have partial success in implementing SIP. (Previous research: McInerney & Leach, 1992; Reeves, 2000. Survey questions: 86, 87, 88.)
- Increased interaction and communication between home and school results in better home/school relations. This increases parent involvement. (Previous research: Bauch & Goldring, 1995; Haynes, Comer, & Hamilton-Lee, 1989. Survey questions: 89, 91, 92, 93, 94.)
- Parents become more involved in their children's education at home and at school. Initially, parents who are newer to schools, who are less familiar with the working language of schools, and who are less educated are drawn to social activities. Then, schools can build on this involvement to invite parents into other school-related activities. More parents become engaged in student learning initiatives. Parents who have experience with schools and who are better educated become involved in school councils, SIP teams, and decision-making roles. (Previous research: Hatton, 2001; Sanders & Epstein, 1998. Survey questions: 52, 53, 54, 55, 56, 57, 58, 62, 93, 94.)
- Parents become more educated about their role in education through SIP. They become more knowledgeable about educational matters and the school system. (Previous research: Hatton, 2001; Sanders & Epstein, 1998. Survey questions: 62, 90, 92.)

Table 2. SIP intended and achieved samples for schools with two or more respondents

                 Administrators        Teachers              Parents
District code    Intended  Achieved    Intended  Achieved    Intended  Achieved

100              96        30          576       137         576       124
200              31        15          186       59          186       53
300              32        22          192       102         192       93
400              26        17          156       73          156       59
500              15        14          90        71          90        63
600              26        1           156       4           156       4
700              1         1           6         6           6         4
Total            227       100         1362      452         1362      400

Data Collection Procedures


A staff member in each district agreed to be a contact person for the survey and to
arrange distribution and collection of materials through the board courier system.
The research team prepared a package of materials for each school that included a
letter to the principal with instructions for instrument distribution within the school
and a set of 13 envelopes, each containing a covering letter and the relevant
instrument. Principals were instructed to complete the administrator instrument or
give it to a vice principal depending upon who was most involved with the school
improvement planning process. The six teacher and six parent instruments were also
to be given to those individuals most involved in the school improvement planning
process and who might also be members of the school council. Upon completion of
the survey, respondents were instructed to seal it in the envelope and return it to the
district contact person identified by the label on the envelope. The unopened
envelopes were returned to the research team by courier. Data from the returned
surveys were scanned into an SPSS database for analysis, with a school code as the
only identification retained in the database.
Reading, writing, and mathematics achievement data were obtained from the
results of the provincial tests for students in Grades 3 and 6. These data were
available from the provincial testing agency’s website for the 2000 – 2001 and 2001 –
2002 school years for 88 of the 100 schools in the sample.

Data Analysis

Individual data files for each group were cleaned and the three datasets were
combined into a large file containing all 1,251 cases. This became the working file for
computing scales and then aggregating the data by school for further analyses. SPSS
was used to compute the scales, aggregate the data, and then to calculate means,
standard deviations, scale reliabilities (Cronbach’s alpha) for all scales measuring the
variables, and correlation coefficients. Three items with a negative effect on scale
reliability were removed from three scales. Independent sample t tests and analysis of
variance (oneway ANOVA) procedures were used to compare ratings from the three
sources for identical measures to determine whether there was a pattern of ratings by
source that could skew results solely due to the number of respondents. Factor
analyses using principal components extraction with varimax rotation were undertaken
to analyze the 17 scales and five aggregate variables, to estimate the number of factors
measured by specific items, and to determine the extent to which our conceptual
distinctions (see Figure 1) could be verified empirically.
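For readers who want to see the mechanics, the sketch below reproduces the core of this pipeline in Python rather than SPSS: scale scores as item means, Cronbach's alpha from its textbook formula, and aggregation of respondent-level scores to school means. All data and names are synthetic; this illustrates the procedure, not the study's actual code.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Synthetic respondent-level data: four items of one scale plus a school code.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
data = pd.DataFrame({
    f"item{i}": np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=200)), 1, 5)
    for i in range(1, 5)
})
data["school"] = rng.choice([f"S{j:02d}" for j in range(20)], size=200)

item_cols = [c for c in data.columns if c.startswith("item")]
print(f"alpha = {cronbach_alpha(data[item_cols]):.2f}")  # scale reliability

# Scale score = mean of the items; the school (the unit of analysis) gets
# the mean of its respondents' scale scores.
data["scale"] = data[item_cols].mean(axis=1)
school_level = data.groupby("school")["scale"].agg(["mean", "std"])
print(school_level.head())
```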
LISREL was used to assess the overall model’s effects, as well as the direct and
indirect effects of school leadership, parent participation, and other variables in the
framework on mean student achievement, as measured by the provincial tests of
literacy and mathematics in Grades 3 and 6, and on perceived outcomes for students,
principals, teachers, and parents as reported on the surveys. This path analytic
technique allows for testing the validity of inferences about relationships between
pairs of variables by controlling for the effects of other variables.
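LISREL is commercial software, but the same kind of path model can be sketched with the open-source semopy package in Python. The specification below is a simplified reading of Figure 1 fitted to synthetic data—illustrating how direct and indirect effects are estimated while controlling for other variables—and is not the authors' exact model.

```python
import numpy as np
import pandas as pd
import semopy  # open-source SEM package, standing in here for LISREL

rng = np.random.default_rng(1)
n = 100  # one row per school, matching the study's unit of analysis

# Synthetic school-level scale scores; variable names merely echo Figure 1.
leadership = rng.normal(size=n)
parent_participation = rng.normal(size=n)
planning = 0.5 * leadership + rng.normal(scale=0.8, size=n)
implementation = 0.6 * leadership + 0.3 * planning + rng.normal(scale=0.7, size=n)
achievement = 0.35 * implementation + rng.normal(scale=0.9, size=n)

df = pd.DataFrame({
    "leadership": leadership, "parent_participation": parent_participation,
    "planning": planning, "implementation": implementation,
    "achievement": achievement,
})

# Simplified path structure: leadership and parent participation feed the
# planning and implementation processes, which in turn predict achievement.
spec = """
planning ~ leadership + parent_participation
implementation ~ leadership + planning
achievement ~ implementation + planning + leadership + parent_participation
"""
model = semopy.Model(spec)
model.fit(df)
print(model.inspect())           # path coefficients and standard errors
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```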

Results
Quality of the evidence. Table 4 reports the sources of data for each variable, response
means, and standard deviations aggregated to the school level, scale reliabilities, and
number of items in each of the scales. The internal reliabilities of all scales are
acceptable, ranging from .72 to .96.
Results of the factor analyses indicated that, in 17 of the 22 analyses, only one
factor was extracted from the individual items or scales analyzed.
Table 4. Source of data, mean, standard deviation, and reliability for variables in the framework

                                           Source(a)  Mean(b)  SD   Reliability(c)  No. of items

School leadership (aggregate)                         3.90     .31  .65
  Principal                                A, T, P    4.23     .38  .94             11
  Teacher                                  A, T       3.82     .31  .92             11
  Parents                                  A, P       3.66     .49  .88             4
Parent participation (aggregate)                      3.95     .49  .73
  School                                   P          3.85     .54  .92             5
  Home                                     P          4.04     .56  .96             6
External support and context (aggregate)             3.53     .62  .80
  District                                 A          4.04     .66  .74             4
  Provincial                               A          3.02     .83  .78             4
SI planning processes                      A, T       3.71     .33  .72             4
Contents of the SIP                        A, T       4.11     .30  .79             4
Implementation processes (aggregate)                  3.74     .30  .74
  Implementation of SIP                    A, T       3.74     .39  .85             4
  Monitoring of SIP                        A, T       3.88     .30  .75             5
  Communication of SIP                     A, P       3.71     .42  .81             8
  Teacher collaboration                    T          3.62     .42  .80             4
Outcomes
  Students                                 A, T, P    3.81     .36  .91             3
  Principals                               A          3.98     .68  .78             4
  Teachers combined                        T          3.35     .40  .75             7
    Instruction                            T          3.82     .42  .85             4
    Working conditions                     T          3.29     .61  .72             3
  Parents                                  A, P       3.53     .49  .93             6

(a) A = Administrators, T = Teachers, P = Parents.
(b) Rating scale: 1 = strongly disagree to 5 = strongly agree.
(c) Cronbach's alpha.

Items measuring teacher leadership loaded on two factors, but neither was
conceptually cohesive, nor did reliability improve when the factors were treated as
separate scales. Items measuring principal leadership and communication loaded on
two factors; one factor contained items with principal ratings only, and the second
factor included items also rated by parents. Teacher outcomes loaded on two factors:
outcomes for instructional practices and effects on working conditions. Separate
scales were developed for each factor extracted from the teacher outcome measures.
Results of the t tests and ANOVAs indicated a mixed pattern of responses from
different sources. Administrators generally rated measures higher than teachers, while
parents sometimes rated them higher and other times lower than administrators. However,
these differences at the individual level did not appear to affect ratings when
aggregated by school, since the number of respondents from a particular source did
not predict the school's rating of a measure. Principal leadership was given the highest
rating (m = 4.23), indicating that most respondents agreed their principal was
providing effective leadership for the school improvement effort. Parent leadership
was given the lowest leadership rating (m = 3.66) of the three sources of leadership.
Parents generally agreed (m = 4.04) that they participated in their child's learning at
home but were somewhat less certain that they participated at school (m = 3.85) or
that the school improvement effort had outcomes for parents (m = 3.53).

Testing the Overall Model


To explore the relationships among the variables in the SIP model, a series of
LISREL analyses was conducted using alternative measures of potential outcomes of
the school improvement effort as the dependent variables. One model tested effects
on student provincial test scores in the 88 schools for which achievement data were
available (percentage of students performing at Levels 3 and 4). We used combined
reading, writing, and math scores averaged over 2 years as the measure in
acknowledgement of Linn's (2003) argument that such aggregation substantially
improves the accuracy of this sort of data. Also tested were models using, as the
dependent measures, respondents' perceptions of the outcomes of school improvement
on students, principals, teachers, and parents in the 100 schools in our sample.
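Constructing the achievement measure is simple arithmetic; the sketch below shows one plausible implementation with hypothetical column names, averaging the three subjects within each year and then the two years, and computing the 2-year gain that also appears in Table 5.

```python
import pandas as pd

# Hypothetical per-school provincial results: percentage of students at
# Levels 3 and 4, by subject and year (column names are invented).
scores = pd.DataFrame({
    "school": ["S01", "S02"],
    "read_y1": [54.0, 40.0], "write_y1": [50.0, 38.0], "math_y1": [58.0, 44.0],
    "read_y2": [56.0, 43.0], "write_y2": [52.0, 41.0], "math_y2": [60.0, 45.0],
})

# Average across the three subjects within each year, then across the two
# years (following Linn's argument that aggregation stabilizes such data).
for year in ("y1", "y2"):
    scores[f"mean_{year}"] = scores[[f"{s}_{year}" for s in
                                     ("read", "write", "math")]].mean(axis=1)
scores["two_year_mean"] = scores[["mean_y1", "mean_y2"]].mean(axis=1)
scores["two_year_gain"] = scores["mean_y2"] - scores["mean_y1"]
print(scores[["school", "two_year_mean", "two_year_gain"]])
```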
Table 5 reports the total effects for the variables in the reduced framework on
students' mean achievement scores and on the outcomes measured by the survey.
Four of the five models met the criteria for assessment of model fit.
Table 5. Standardized total effects for independent and mediating variables on mean student
achievement and perceived outcomes for students, principals, teachers, and parents

                             Achievement scores       Perceived outcomes for
                             2-year    2-year
                             mean      gain      Students  Principals  Teachers  Parents(1)

Independent variables:
  Out-of-school support      .01       .00       −.06*     .25*        −.07      −.02
  Parent participation       −.15      −.05      .20*      .04         −.08      .49*
  School leadership          .11       −.06      .48*      .04         .55*      .25*
Mediating variables:
  Planning                   −.01      .00       .30*      .00         .34*      .09
  Implementation process     .35*      −.17      .45*      .09         .51*      .40*
  Contents of SI plan        −.23      .11       .28*      −.05        .33*      −.06
Percentage of explained
variance for DV              7%        2%        51%       7%          46%       47%

*Significant effects, t > 1.96.
(1) Inadequate model; most indices assessing model fit do not meet criterion.
Only the model using perceived parent outcomes as the dependent measure had an
inadequate fit with the data. As the second column in Table 5 indicates,
implementation processes had a
significant effect on students’ mean achievement over the 2 years. At the same time,
several variables (parent participation, contents of the plan, and planning) had
non-significant, negative effects on mean student achievement.
With respect to perceived student outcomes, school leadership had the strongest
relationship (.48), and parent participation had the second weakest, but still
significant, relationship (.20). Out-of-school support
was the only variable with a significant relationship with outcomes for principals.
School leadership (.55) and implementation processes (.51) had the strongest
relationship with perceived outcomes for teachers, whereas parent participation had
no significant relationship. Although parent participation (.49) had the strongest
relationship with parent outcomes, the evidence must be treated with caution because
the model as a whole did not meet the fit criteria.

The model which tested effects on perceived outcomes for students explained the
largest proportion of variation in outcomes (51%), whereas the models testing mean
student achievement and outcomes for principals explained the smallest variation
(7%). Almost seven times as much of the variation in outcomes for teachers as
compared with principals was explained by this model (46% vs. 7%).

Summary and Conclusion


Our introduction to this paper pointed to some widely used approaches for improving
student achievement: the creation of markets (e.g., Raywid, 1992); restructuring
schools through various forms of school-based management (Whitty et al., 1998);
standards setting (e.g., Feuerstein & Dietrich, 2003); and whole school reform
(Herman, 1999). We noted that evidence about the effects of these external-to-the-
school alternatives on student achievement is most impressive, although not
overwhelming, for whole school reform but mixed, at best, for the others.
An array of internal-to-the-school initiatives offers more compelling evidence of
impact; for example, creation of professional learning communities (e.g., Toole &
Seashore Louis, 2002), building collaborative cultures (e.g., Feiman-Nemser &
Floden, 1986; Hargreaves & Macmillan, 1991; Little, 1989), providing school-based
professional development for teachers (e.g., Harris, Muijs, Chapman, Stoll, & Russ,
2003), and involving teachers in action research (e.g., Wideman, 2002).
Of course, these internal and external approaches to change are often combined in
unique configurations. Whatever the configuration, however, SIP is almost always a
component. Typically it is conceived of as the foundation or framework around which
other change initiatives are built at the local level. In some jurisdictions, such as
Ontario, SIP has served as by far the dominant strategy for increasing student
learning. And this is the case, as we noted in our review of literature, in spite of the
very modest amount of prior evidence about its contribution to student learning.
Indeed, to the extent that school improvement planning is comparable to "strategic
planning," Mintzberg (1994) declared it "dead" 10 years ago.
Our study inquired about the effects of a very robust version of SIP on student
achievement and on outcomes for teachers, parents, and administrators. Our two-phased,
mixed-methods study was conducted in a context largely free of other
provincial capacity-building initiatives, but a context that did offer motivational
inducements for change in the form, for example, of curriculum standards and a
standards-driven provincial testing system that resulted in the public rankings of
schools in some districts (see Leithwood, Jantzi, & Steinbach, 2002, for the other
inducements). In this context, SIP was the government’s strategy of choice for
improving the performance of low performing or "failing" schools (not a label
publicly used in Ontario). And although other goals could be included, all schools
were required to focus on some set of literacy and numeracy skills in their school
improvement plans.
The first, qualitative, phase of our study combined evidence from a longitudinal
study carried out in 10 schools with a review of prior research, to produce a robust
model of SIP. Phase two tested this model quantitatively using three measures of
impact: perceptions of student impact by teachers, parents, and administrators;
perceptions of outcomes for teachers, parents, and administrators; and a combined
Grade 3 and 6 reading, writing, and math achievement score averaged over 2 years.
These concluding remarks focus on four results of the study and selected implications
for research and practice.
First, estimates of SIP effects on students depended very much on the "instruments"
used to measure those effects; judgments of teachers, parents, and administrators were
largely unrelated to the results of provincial test scores, a result very similar to the
evidence reported by Flinspach and Ryan (1992) and described in our introduction.
One might argue that those teachers, administrators, and parents directly involved with
the SIP processes had opportunities to develop an appreciation of impact much more
nuanced and detailed than was possible with a provincial achievement test; one of the
reviewers strongly advocated this explanation. But it might also be the case that those
directly involved in SIP simply had a tacit stake in their own success and wildly
miscalculated their actual contribution to improving the achievement of students. This
possibility must at least be entertained seriously for two reasons: First, such errors in
human judgment are common and have been well documented (Kahneman, Slovic, &
Tversky, 1982); and, second, actually improving achievement has proven to be an
extraordinarily difficult and badly underestimated challenge, even when vastly greater
resources are devoted to it (e.g., as with Comprehensive School Reforms) than are
typical of the resources usually available for SIP.
The overall effects of our robust SIP model explained a significant amount of the
variation in student achievement, a second noteworthy result of our research. While
some will consider the amount of variation (7%) to be quite small, it takes on
considerable importance when compared with the total variation typically explained
by all factors associated with schools. Such variation is usually estimated to be from
12% to 20% using indicators of achievement similar to those used in this study (e.g.,
Leithwood & Jantzi, 1999). So some things included in our overall model of SIP
processes clearly add value to students’ school experiences.
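To make that comparison concrete: if schools as a whole typically account for 12% to 20% of the variation in achievement, then the 7% explained by the SIP model corresponds to roughly

```latex
\[
\frac{0.07}{0.20} = 0.35
\qquad \text{to} \qquad
\frac{0.07}{0.12} \approx 0.58
\]
```

that is, about one third to well over half of the variation typically attributable to schools.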
A third important result of our study was confirmation of considerable earlier
evidence about school leaders being critical to the success of SIP processes (e.g.,
Harris et al., 2003). This is leadership provided by school administrators and by
teacher leaders working as members of their school improvement teams. In contrast,
we found little support for parent leadership in the SIP process in spite of the central
focus of parents in our larger study and recent policy initiatives (e.g., school councils)
aimed at providing a greater voice for parents in school decision-making. While this
result will come as no surprise to those familiar with the relevant empirical evidence,
such evidence has done little to dampen the enthusiasm of policy-makers, for what
may be primarily political reasons—it is hard to deny that parents ought to have a say
in how their children's schools are run.
Parent participation is a complex matter, however. For example, like us,
Chrispeels, Castillo, and Brown (2000) found that parent leadership in the school
improvement process had little positive effect on teaching and learning; indeed,

involving parents and students in the school improvement team had a negative effect
on such outcomes. However, asking students and parents for input in the planning
process had a positive effect on the teams’ focus on teaching and learning. One might
conclude from this evidence that involving a small number of parents in school
decision-making helps keep the focus of SIP where most believe it should be—
student outcomes—but that the majority of parents should be encouraged to devote
most of the precious little time they have available to helping educate their own
children. Providing such encouragement likely falls to those in school leadership roles
who will need to work with teachers who may still be reluctant about finding a
meaningful role for parents in the instruction of their students.
Fourth, within our SIP model as a whole, those processes associated with
implementation of the improvement plan accounted for by far the largest effect on
student test scores. This included opportunities for staff development, the ability of
the school, as a whole, to learn from new ideas and to problem-solve, and
collaboration among those in the school. SIP implementation also encompassed
shared norms of continuous improvement, recognition of hard work and results, and
structures in the school that allow staff the time to problem-solve together.
Indeed, neither the content of the plan nor the processes used to develop it had any
significant effect, at least on test score estimates of student learning. This may come as
a shock to many administrators and consultants who agonize over the planning
process itself, worrying, for example, about how planning can be carried out in highly
participatory ways in order to ensure high levels of teacher and parent commitment
to the plan. We cannot say from our data that they are wasting their time.
But sorting out whether variations in the processes leading up to the formulation of
school improvement plans matter (and under what conditions) is clearly a worthwhile
goal for future SIP research.
While it may seem to fly in the face of the many admonitions to involve all stakeholders
from the outset of a change initiative, we speculate that under some conditions often
faced by low-performing schools (e.g., impending school reconstitution), it may be a
more productive use of scarce time, at least for trusted school leaders, to use reliable
and transparent data to first determine the most urgent goals for school improvement
and then to enlist the energies of those who must be involved in achieving those goals.
Exploring the relationship between the context in which schools find themselves and
the most productive approaches to improvement planning is an important goal for
further research.
Notes
1. Initially sponsored by the Ontario government’s Education Improvement Commission (EIC).
When EIC closed its doors, the Canadian Education Association assumed responsibility for
supervision of the project.
2. These tests are administered by the Educational Quality and Accountability Office (EQAO).
3. These researchers included Patricia Allison, Susan Drake, Dany Laveault, Ronald Wideman,
and Glen Zederayko.
References
Bauch, P., & Goldring, E. (1995). Parent involvement and school responsiveness: Facilitating the
home-school connection in schools of choice. Educational Evaluation and Policy Analysis,
17(1), 1 – 21.
Bauer, S. C., & Bogotch, I. E. (2001). Analysis of the relationships among site council resources,
council practices and outcomes. Journal of School Leadership, 11, 98 – 119.
Broadhead, P., Hodgson, J., Cuckle, P., & Dunford, J. (1998). School development planning:
Moving from the amorphous to the dimensional and making it your own. Research Papers in
Education, 13(1), 3 – 18.
Chrispeels, J., Castillo, S., & Brown, J. (2000). School leadership teams: A process model of team
development. School Effectiveness and School Improvement, 11, 22 – 56.
Dellar, G. B. (1994, April). Implementing school decision-making groups: A case study in restructuring.
Paper presented at the annual meeting of the American Educational Research Association,
New Orleans, LA.
Dellar, G. B. (1995). The impact of school-based management on classroom practice at the
secondary school level. Issues in Educational Research, 5(1), 23 – 34.
Earl, L., & Lee, L. (2000). Learning for a change: School improvement as capacity building.
Improving Schools, 3(1), 30 – 38.
Earl, L. M., & Lee, L. (1998). Evaluation of the Manitoba School Improvement Program. Toronto,
Canada: OISE/UT.
Eastwood, K., & Tallerico, M. (1990). School improvement planning teams: Lessons from practice.
Planning and Changing, 21(1), 3 – 12.
Feuerstein, A., & Dietrich, J. (2003). State standards in the local context: A survey of school board
members and superintendents. Educational Policy, 17(2), 237 – 257.
Feiman-Nemser, S., & Floden, R. E. (1986). The cultures of teaching. In M. Wittrock (Ed.),
Handbook of research on teaching (pp. 505 – 526). New York: Macmillan.
Flinspach, S. L., & Ryan, S. P. (1992). Vision and accountability in school improvement planning.
Chicago: Chicago Panel on Public School Policy and Finance.
Fullan, M., & Hargreaves, A. (1991). What’s worth fighting for in your school? Milton Keynes, UK:
Open University Press.
Giles, C. (1998). Control or empowerment: The role of site-based planning in school improvement.
Educational Management and Administration, 26(4), 407 – 415.
Glickman, C. D. (1993). Renewing America’s schools: A guide for school-based action. San Francisco:
Jossey-Bass.
Glover, D., Levacic, R., & Bennett, N. (1996). Leadership, planning and resource management in
four very effective schools. Part II: Planning and performance. School Organisation, 16(3),
247 – 261.
Griffith, J. (2001). Principal leadership of parent involvement. Journal of Educational Administration,
39(2), 162 – 186.
Hargreaves, D. H., & Hopkins, D. (1991). The empowered school: The management and practice of
development planning. London: Cassell.
Hargreaves, A., & Macmillan, R. (1991, April). Balkanized secondary schools and the malaise of
modernity. Paper presented at the annual meeting of the American Educational Research
Association, San Francisco.
Harris, A. (2001, January). Change at the learning level. Paper presented at the International
Congress for School Effectiveness and Improvement, Toronto, Canada.
Harris, A., Muijs, D., Chapman, C., Stoll, L., & Russ, J. (2003, May). Raising attainment in schools
in former coalfield areas (Research Rep. 423 prepared for the Department for Education and
Skills). University of Warwick, UK.
Harris, A., & Young, J. (2000). Comparing school improvement programmes in England and
Canada. School Leadership & Management, 20(1), 31 – 42.
Hatton, E. (2001). School development planning in a small primary school: Addressing the
challenge in rural NSW. Journal of Educational Administration, 39(2), 118 – 133.
Haynes, N. M., Comer, J. P., & Hamilton-Lee, M. (1989). School climate enhancement through
parental involvement. Journal of School Psychology, 27, 87 – 90.
Heistad, D., & Spicuzza, R. (2000, April). Measuring school performance to improve student
achievement and to reward effective programs. Paper presented at the annual meeting of the
American Educational Research Association, New Orleans, LA.
Henderson, A., & Berla, N. (Eds.). (1994). A new generation of evidence: The family is critical to
student achievement. Columbia, MD: National Committee for Citizens in Education.
Herman, R. (1999). An educators’ guide to schoolwide reform. Arlington, VA: American Association of
School Administrators.
Hoover-Dempsey, K. V., & Sandler, H. M. (1997). Why do parents become involved in their
children’s education? Review of Educational Research, 67(1), 3 – 42.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and
biases. Cambridge, UK: Cambridge University Press.
Lauder, H., & Hughes, D. (1999). Trading in futures. Buckingham, UK: Open University
Press.
Leithwood, K., Aitken, R., & Jantzi, D. (2000). Making schools smarter (2nd ed.). Thousand Oaks,
CA: Corwin Press.
Leithwood, K., & Jantzi, D. (1999). Transformational school leadership effects. School Effectiveness
and School Improvement, 10, 451 – 479.
Leithwood, K., Jantzi, D., & McElheron-Hopkins, C. (2005). Parent participation in school
improvement planning. Toronto, Canada: Canadian Education Association.
Leithwood, K., Jantzi, D., & Steinbach, R. (1999). Changing leadership for changing times.
Buckingham, UK: Open University Press.
Leithwood, K., Jantzi, D., & Steinbach, R. (2002). School leadership and teacher motivation to
implement accountability policies. Educational Administration Quarterly, 38(1), 94 – 119.
Leithwood, K., & Menzies, T. (2000). Forms and effects of school-based management: A review.
Educational Policy, 12(3), 325 – 346.
Linn, R. (2003). Accountability, responsibility and reasonable expectations. Educational Researcher,
32(7), 3 – 13.
Little, J. W. (1989). District policy choices and teachers’ professional development opportunities.
Educational Evaluation and Policy Analysis, 11(2), 165 – 179.
MacGilchrist, B., & Mortimore, P. (1997). The impact of school development plans in primary
schools. School Effectiveness and School Improvement, 8, 198 – 218.
McBee, M. M., & Fink, J. S. (1989). How one school district implemented site-based school
improvement planning teams. Educational Planning, 7(3), 32 – 36.
McInerney, W. D., & Leach, J. A. (1992). School improvement planning: Evidence of impact.
Planning and Changing, 23(1), 15 – 28.
Mintzberg, H. (1994). The rise and fall of strategic planning: Reconceiving roles for planning, plans,
planners. New York: The Free Press.
O’Donoghue, T. A., & Dimmock, C. (1996). School development planning and the classroom
teacher: A Western Australian case-study. School Organisation, 16(1), 71 – 87.
Raywid, M. (1992). Choice orientations, discussions, and prospects. Educational Policy, 6(2), 105 –
122.
Reeves, J. (2000). Tracking the links between pupil attainment and development planning. School
Leadership & Management, 20(3), 315 – 332.
Sackney, L., Walker, K., & Hajnal, V. (1998). Leadership, organizational learning, and selected
factors relating to the institutionalization of school improvement initiatives. Alberta Journal of
Educational Research, 44(1), 70 – 89.
Sanders, J., & Epstein, J. (1998). School-family-community partnerships and educational change.
In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of
educational change. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Shields, C. M. (1995). The delusion of learning from experience: Lessons from an evaluation phase
of a comprehensive school improvement program. School Effectiveness and School Improvement,
6, 175 – 183.
Sitkin, S. (1992). Learning through failure: The strategy of small losses. In B. Staw & L. Cummings
(Eds.), Research in organizational behavior (Vol. 14, pp. 231 – 266). London: JAI Press.
Stoll, L., & Fink, D. (1996). Changing our schools. Buckingham, UK: Open University Press.
Teddlie, C., & Reynolds, D. (2000). The international handbook of school effectiveness research.
London: Falmer Press.
Toole, J., & Seashore Louis, K. (2002). The role of professional learning communities in
international education. In K. Leithwood & P. Hallinger (Eds.), Second international handbook
on educational leadership and administration (pp. 245 – 280). Dordrecht, The Netherlands:
Kluwer Academic Publishers.
Watkins, K. E., & Marsick, V. J. (1993). Sculpting the learning organization. San Francisco, CA:
Jossey-Bass.
Whitty, G., Power, S., & Halpin, D. (1998). Devolution and choice in education: The school, the state,
and the market. Buckingham, UK: Open University Press.
Wideman, R. (2002, June). Using action research to improve provincial test results. Paper presented at
the Canadian Society for the Study of Education, 30th annual conference, Toronto, Canada.
Wilson, V., & McPake, J. (1998). Managing change in small Scottish primary schools. Edinburgh,
Scotland: Scottish Council for Research in Education.
Wilson, V., & McPake, J. (2000). Managing change in small Scottish primary schools: Is there a
small school management style? Educational Management and Administration, 28(2), 119 – 132.