To cite this article: Leithwood, Kenneth, Jantzi, Doris, & McElheron-Hopkins, Charryn (2006). 'The development and
testing of a school improvement model', School Effectiveness and School Improvement, 17(4), 441 – 464.
To link to this article: DOI: 10.1080/09243450600743533
URL: http://dx.doi.org/10.1080/09243450600743533
School Effectiveness and School Improvement
Vol. 17, No. 4, December 2006, pp. 441 – 464
This multimethod study generated and tested a ‘‘best evidence’’ model of school improvement
processes (SIP) capable of improving student achievement. Initially developed through the review
of a comprehensive body of previous empirical research, the model was further refined through a
2.5-year longitudinal study in 10 schools. A quantitative test of this refined model was then conducted
using survey evidence from administrators, teachers, parents, and students in 100 elementary
schools. The model as a whole explained modest but significant amounts of variation in student
achievement across schools. School leadership and SIP implementation processes accounted for the
largest proportion of explained variation.
Introduction
Many schools labeled ‘‘failing’’ or ‘‘low performing,’’ in today’s accountability
context, are serving highly diverse student populations—diverse cultures, languages,
religions, and economic circumstances. Such diversity is challenging on two fronts—
the sheer range of educational needs that schools must take into account, and the
presence of significant numbers of children whose needs typically exceed what many
schools have the capacity to address adequately. Because schools in most
jurisdictions are now being held accountable for teaching all students to the same
high standards, finding ways of addressing both types of challenges has become an
urgent matter.
In this paper, we develop a ‘‘best evidence’’ model of school improvement
processes in schools serving diverse student populations and test its effects on student
achievement in a broad sample of schools. We do this because school improvement
planning (SIP), and action guided by such planning, is arguably the most common
response to low performance and to the meeting of assigned achievement targets; this
is the case in spite of meager evidence about its effects, a matter we elaborate on
below. Sometimes SIP is invoked in combination with other strategies and
sometimes by itself. Widely discussed alternatives (or adjuncts) to SIP include the
creation of markets in order to increase competition among schools (e.g., Raywid,
1992), restructuring schools through various forms of school-based management
(Whitty, Power, & Halpin, 1998), standards setting (e.g., Feuerstein & Dietrich,
2003), and whole school reform (Herman, 1999). Evidence about the effects of these
alternatives on student achievement is most impressive for whole school reform (e.g.,
Herman, 1999). The effects of standards setting have not been subject to sufficient
evaluation to assess impact. Several other alternatives for which there is ample
evidence have failed to live up to their initial promise (for school-based management,
see Leithwood & Menzies, 2000; and for the creation of markets, see Lauder &
Hughes, 1999).
The research reported in this paper was part of a larger study of school
improvement processes with special emphasis on parents' roles in school improvement
planning (Leithwood, Jantzi, & McElheron-Hopkins, 2005).1 A mixed-methods
research design was used to collect evidence in two distinct phases. The first phase
of our work was ‘‘theory generation’’; we asked ‘‘What do school improvement
processes consist of in their most powerful form?’’ Theory testing was the purpose
for the second phase of our research; in this phase we asked ‘‘What is the impact on
students and schools of school improvement processes in their most powerful
form?’’
Prior Evidence
Locating the evidence. Our review of prior research aimed to alert us to key issues,
areas of robust knowledge, and problems concerning school improvement about
which not much was yet known (the full review can be obtained from the authors).
Relevant research was located initially through an online search of the ERIC system.
This search uncovered 29 documents on school development planning and 38 on
school improvement planning: The term ‘‘school development planning’’ is most
frequently used in the United Kingdom and Australia while the term ‘‘school
improvement planning’’ is most often used in Canada and the United States. Forty-
nine of these 67 documents were chosen for review based on their availability and
their relevance to our study. Of these, 33 were empirical studies undertaken in
Australia, Canada, the United Kingdom (including England, Wales, Ireland, and
Scotland), and the United States.
Reviews of literature about both SIP and school effectiveness also were analyzed.
Reviews reported in The International Handbook of School Effectiveness Research
(Teddlie & Reynolds, 2000) were particularly helpful in providing an international
perspective on the field and further references about factors associated with school
effectiveness.
SIP impact. In spite of SIP's widespread endorsement, our search uncovered relatively
little evidence concerning the organizational and student outcomes associated with it.
About organizational outcomes, McInerney and Leach (1992)
found that positive outcomes outnumbered negative outcomes two to one. The list of
positive outcomes in this study included increased awareness of the school’s
strengths and weaknesses, increased unity of staff, increased communication with
School improvement planning processes. Just what are these school improvement
planning processes that have such effects? SIP processes were described in very
similar terms across studies, while the processes used to implement such plans varied
much more. Differences in organizational contexts (e.g., school culture/characteristics
and district/government role), and in the social dimensions of planning (e.g.,
leadership, collaboration, teamwork, communication, and decision-making) may
account for some of the variations in outcomes described in the previous section.
These processes usually are described as linear or cyclical, with several main stages.
Furthermore, engagement in these processes is assumed to be continuous; once the
final stage in the cycle is reached, the process begins afresh with a focus on problems
discovered during implementation, or as new priorities arise.
Hargreaves and Hopkins (1991, pp. 4 – 5) outline a five-stage improvement
process: getting started; conducting an audit of the school’s strengths and weaknesses;
setting priorities and targets; implementation or putting the plans in place; and
evaluating the success of the plans and their implementation. These stages of SIP
described by Hargreaves and Hopkins are illustrative of the main stages reported in
other literature we reviewed (e.g., Flinspach & Ryan, 1992; Heistad & Spicuzza, 2000;
McBee & Fink, 1989; McInerney & Leach, 1992; Wilson & McPake, 2000).
The first stage in SIP involves activities and decisions leading to the adoption or
beginning of the planning process. In some cases, the decision to engage in school
developmental or improvement planning is mandated by a senior level of
government; this is the case, for example, in Australia (Dellar, 1995; Hatton,
2001), the United Kingdom (Giles, 1998), and Chicago (Flinspach & Ryan, 1992).
Adoption may also be a choice for schools, as in the Manitoba School Improvement
Program (Earl & Lee, 1998). Communications with stakeholders in the school
community about the planning process is typically part of this stage. In many schools,
a group or several teams are organized to participate in the planning process. Training
in the process of school development/improvement planning may be undertaken.
During the second ‘‘design’’ stage, schools determine what should be included in
their plan by incorporating requirements from district and senior levels of
government with school needs and priorities. They examine their strengths and
weaknesses (sometimes referred to as conducting an audit) using achievement data
and other pertinent information (MacGilchrist & Mortimore, 1997). A plan is
established according to a framework that requires action to be taken over a period of
time, usually 1 to 5 years. During this second design stage, consideration is given to
the school’s mission, its goals, indicators of success, responsibilities for carrying out
actions, the setting for improvement, and the timing.
During the implementation stage, plans are carried out at the classroom and/or
school level. Responsibilities for implementation may be shared by the principal,
teachers, school-based decision-making groups (or improvement teams), and other
stakeholders. Monitoring is sometimes viewed as part of the implementation stage
and is carried out for formative purposes. Monitoring the effects of the plan and the
processes used for its implementation allows schools to see where they are succeeding
or where they may need to make adjustments during the implementation process.
Evaluation is sometimes undertaken by external bodies and/or by the school itself.
It may be a formal requirement; this is the case, for example, in the UK with an
inspection service which judges the failure or success of schools and their
improvement efforts. An external evaluation is also part of the Manitoba School
Improvement Process. Evaluation also may be less formal and limited to school
personnel discussing progress towards goals as they have experienced it. Reporting on
the results of the planning process within the school community or beyond to districts
or governments is also a feature of the evaluation phase in some settings
(MacGilchrist & Mortimore, 1997). This serves both formative and summative
446 K. Leithwood et al.
Sample. Case study schools were located almost equally in public and Catholic school
districts. These were districts mostly in the southern and central part of the province,
but spread widely from east to west; one district was in the north. Schools were located,
in almost equal numbers, in urban, suburban, and rural locations (see Table 1).
Schools’ sizes ranged from a high of 850 students to a low of approximately 240 with
a mean size of about 400 students. Eight of the schools were elementary, usually JK to
8. Most were serving a high proportion of relatively needy students from lower
income families. Two schools served largely francophone populations, and one
school a predominantly Portuguese population. Physical facilities were generally
described as well maintained and several were relatively new. Provincial achievement
evidence for math and literacy in grades 3 and 6 indicated that all but one of the
schools were scoring below the provincial average, making them prime candidates
within their districts for relatively aggressive school improvement initiatives.
[Table 1. Case study schools: case, location, size, level, language, type, and achievement (Gr. 3/Gr. 6)*. Table data not reproduced.]
*Mean percentage of students performing at Levels 3 and 4 on provincial tests in reading, writing,
and mathematics, for which 2000 – 2001 provincial means were 54% for both grades.
**The province does not post scores for schools as small as this one.
Data collection. In each school, interview data were collected from members of the
school council (administrators, teachers, and parents), as well as a small number of
parents and teachers who were not members of the council. Between 5 and 12 people
were interviewed in each school on four separate occasions roughly evenly spaced
over 3 years. The same interview protocols were used in each school, but the nature of
the interviews changed from one data collection period to the next in order to track
changes in the schools. Researchers took detailed notes during all interviews and
ensured, as needed, the accuracy and completeness of these notes by reviewing audio-
taped records made of each interview. Interview and documentary evidence were
first analyzed separately for each school. Results were then aggregated across the 10
schools as part of our model-building process.
Results
A synthesis of prior research and the results of our aggregated 10 case studies were
used to construct a framework or model of school improvement planning processes.
Framework construction entailed, first, identifying the ‘‘factors’’ which our data
indicated were especially prominent in our case study schools and, second,
combining those factors with the results of our review of prior research.
Figure 1 is an overview of the framework resulting from these two sources of data.
It consists of a set of factors or variables and a general indication of how they are
related to one another. Variables 1 to 4, the core variables, are temporally related. A
set of processes (variable 1) initiates planning and usually culminates in a
written plan with contents unique to the school and the processes that it has used
(variable 2). The plan has goals to be achieved and they have an intended influence
on the eventual outcomes of the process (variable 4). But activities undertaken to
accomplish those goals—or implement the plan (variable 3)—produce other
unplanned outcomes, as well.
At the top of Figure 1 are two variables which our data suggest are critical
determinants of the trajectory of the SIP processes, as well as the outcomes. These
variables, monitoring (variable 5) and communication (variable 6), may be carried out
in a variety of ways, and more or less well, but are not necessarily the responsibility of
any single person or group. At the bottom of Figure 1 are five additional variables
each consisting of a set of tasks undertaken by those in specific roles or positions.
Interactions among these variables occur as part of school improvement planning and
implementation processes.
Figure 1. A framework for understanding differences across schools in the outcomes of school
improvement planning processes
provincially administered math and literacy tests at grades 3 and 6² (schools in Ontario
must take explicit account of their provincial math and language scores, among other
things, in developing their school improvement plans).
Sample
The population for Phase Two was the 362 elementary schools in 7 of the 10
districts that participated in our larger project. One district chose not to participate
in Phase Two and another chose to survey only the Phase One case school; districts
cited concerns about staff workload as the reason for not participating in the survey.
The seven districts were representative of diversity in the province including public
and Catholic contexts; English and French jurisdictions; urban, suburban, and rural
areas; and locations in different regions of the province. They varied in size from
approximately 20 to 150 elementary schools. Two thirds of all elementary schools
within each district were randomly selected for a total of 226 schools, in addition to
the case school in the seventh district. In one district with small rural schools, a
stratified random selection procedure was used to ensure representation of the
smaller schools.
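As a rough illustration of the selection procedure just described, the Python sketch below draws two thirds of schools within each district and, for a district with many small rural schools, stratifies the draw by a size band. The data frame and column names (district, school_id, size_band) are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical roster of elementary schools in two participating districts.
schools = pd.DataFrame({
    "district": ["A"] * 30 + ["B"] * 24,
    "school_id": list(range(54)),
    "size_band": ["large"] * 20 + ["small"] * 10 + ["large"] * 16 + ["small"] * 8,
})

# Simple random selection of two thirds of the schools within each district.
selected = schools.groupby("district").sample(frac=2 / 3, random_state=42)

# Stratified variant for a district with small rural schools: sampling two
# thirds within each size band keeps the smaller schools represented.
district_b = schools[schools["district"] == "B"]
stratified = district_b.groupby("size_band").sample(frac=2 / 3, random_state=42)
print(stratified["size_band"].value_counts())
```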
The school was the unit of analysis for this study and three groups were sampled
within each school—administrators, teachers, and parents. School principals were
instructed to distribute the surveys to those teachers and parents who were most
involved in SIP and, where possible, were also members of the school council.
Response rates for administrators, teachers, and parents were 59%, 42%, and
37%, respectively. Although there were responses from 69% of the schools sampled,
not all schools had responses from all three sources. The criterion of two or more
teacher and parent respondents along with the administrator response was used to
determine the final sample of schools for analysis. Table 2 reports the achieved
sample for the 100 schools or 44% of the intended sample that met the criterion for
inclusion in the analysis. In addition to the administrator, sample schools had
responses from a mean of 4.5 teachers and 4 parents (a median of 5 teachers and 4
parents).
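Reading the inclusion rule as "an administrator response plus at least two teacher and at least two parent respondents," it can be expressed compactly in code. The following Python sketch uses hypothetical column names (school_id, role) and is an illustration, not the authors' actual data handling.

```python
import pandas as pd

def meets_criterion(group: pd.DataFrame) -> bool:
    """Keep a school only if it has an administrator response plus
    at least two teacher and at least two parent respondents."""
    counts = group["role"].value_counts()
    return (counts.get("administrator", 0) >= 1
            and counts.get("teacher", 0) >= 2
            and counts.get("parent", 0) >= 2)

# One row per returned survey (hypothetical example data).
respondents = pd.DataFrame({
    "school_id": [1, 1, 1, 1, 1, 2, 2],
    "role": ["administrator", "teacher", "teacher", "parent", "parent",
             "administrator", "teacher"],
})

kept = respondents.groupby("school_id").filter(meets_criterion)
print(kept["school_id"].unique())  # school 2 is dropped: too few respondents
```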
Instruments
Moving from the two sources of evidence used to build our model to a set of survey
questions for quantitative model testing entailed specifying, very concretely, which features
of each variable in the model help explain successful SIP. This required us to ‘‘drill
down’’ into our case study data and the literature to a level of specificity beyond what
has been described to this point. We did this as a way of constructing multi-item
scales to measure each variable in the framework and to examine the relationships
among those variables.
Table 3 summarizes the outcome of this effort. The far right column in Table 3
identifies the items (by number in the surveys) developed to measure each of these
conditions. Items created to measure each variable assume a specific stem which
reads approximately ‘‘To what extent do you agree that the [name of variable].’’ The
‘‘valence’’ of some items was reversed (worded negatively rather than positively) in
the survey instruments.
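Reverse-worded items of this kind are normally recoded before scale scores are computed, so that high values always indicate agreement with the construct. A small illustration for a 1 to 5 agreement scale follows; the item names are hypothetical and the study does not describe its own recoding syntax.

```python
import pandas as pd

# Hypothetical 1-5 Likert responses; item_2_rev was worded negatively.
responses = pd.DataFrame({"item_1": [4, 5, 2], "item_2_rev": [2, 1, 4]})

# Flip the reversed item: on a 1-5 scale, the recoded value is 6 - original.
SCALE_MAX = 5
responses["item_2"] = (SCALE_MAX + 1) - responses["item_2_rev"]
print(responses[["item_1", "item_2"]])
```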
From the list of items described in Table 3, three overlapping survey instruments
were created, one for each of parents, teachers, and administrators. The content
validity of these surveys was addressed through our extensive literature review and the
use of our case study results in the formulation of survey questions. Discussions
within the research team produced the final set of survey items. Face validity was
addressed by submitting the draft instruments to four researchers each of whom had
conducted one of the case studies but who had not been involved in the initial
development and selection of survey items.3
At the end of the survey development process, the final 98 items were allocated to
one of the three surveys depending on who was most likely to have direct information
about the variable being measured. School administrators were given the largest
number of items (75) because of their broader knowledge of the planning process and
influences on it from within and outside the school. Teachers responded to 54 items
and parents to 45. Information for about two thirds of the items was obtained from at
least two sources.
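Where an item was fielded to more than one role, school-level scores must pool across sources. One simple way to do this in Python is to average all available respondent ratings per school and item; this pooling rule and the column names are assumptions for illustration, as the paper does not specify the exact procedure.

```python
import pandas as pd

# Hypothetical long-format ratings: one row per respondent x item.
ratings = pd.DataFrame({
    "school_id": [1, 1, 1, 2, 2],
    "role":      ["administrator", "teacher", "parent", "teacher", "parent"],
    "item":      ["q17", "q17", "q17", "q17", "q17"],
    "rating":    [4, 5, 3, 2, 4],
})

# Pool across whichever sources answered the item in each school.
school_item_means = (ratings.groupby(["school_id", "item"])["rating"]
                     .mean()
                     .unstack("item"))
print(school_item_means)
```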
[Table 2. SIP intended and achieved samples for schools with 2 or more respondents. Table data not reproduced.]
[Table 3. From the evidence to the items: Key findings from our research. Table data not reproduced.]
Data Analysis
Individual data files for each group were cleaned and the three datasets were
combined into a large file containing all 1,251 cases. This became the working file for
computing scales and then aggregating the data by school for further analyses. SPSS
was used to compute the scales, aggregate the data, and then to calculate means,
standard deviations, scale reliabilities (Cronbach’s alpha) for all scales measuring the
variables, and correlation coefficients. Three items with a negative effect on scale
reliability were removed from three scales. Independent sample t tests and analysis of
variance (oneway ANOVA) procedures were used to compare ratings from the three
sources for identical measures to determine whether there was a pattern of ratings by
source that could skew results solely due to the number of respondents. Factor
analyses using principal components extraction with varimax rotation were undertaken
to analyze the 17 scales and five aggregate variables to estimate the number of factors
measured by specific items and to determine the extent to which our conceptual
distinctions (see Figure 1) could be verified empirically.
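To make the scale-building steps concrete, here is a minimal Python sketch of two of the computations named above: Cronbach's alpha for a multi-item scale, and aggregation of respondent-level scale scores to the school level. Column names and data are invented for illustration, and the principal-components/varimax step is omitted; this is not the authors' SPSS syntax.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical working file: one row per respondent, 1-5 Likert items.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(1251, 3)),
                  columns=["item_1", "item_2", "item_3"])
df["school_id"] = rng.integers(1, 101, size=1251)

scale_items = ["item_1", "item_2", "item_3"]
print(f"alpha = {cronbach_alpha(df[scale_items]):.2f}")

# Compute the scale per respondent, then aggregate to the school level,
# matching the school-as-unit-of-analysis design of the study.
df["scale"] = df[scale_items].mean(axis=1)
school_means = df.groupby("school_id")["scale"].agg(["mean", "std"])
```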
LISREL was used to assess the overall model’s effects, as well as the direct and
indirect effects of school leadership, parent participation, and other variables in the
framework on mean student achievement, as measured by the provincial tests of
literacy and mathematics in Grades 3 and 6, and on perceived outcomes for students,
principals, teachers, and parents as reported on the surveys. This path analytic
technique allows for testing the validity of inferences about relationships between
pairs of variables by controlling for the effects of other variables.
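LISREL is a commercial SEM package, so as a hedged illustration of the underlying path-analytic idea (direct plus mediated effects), the sketch below decomposes a total effect using two ordinary least squares regressions in Python. The variable names and simulated data are hypothetical, not the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated school-level data: leadership -> implementation -> achievement.
rng = np.random.default_rng(1)
n = 100
leadership = rng.normal(size=n)
implementation = 0.5 * leadership + rng.normal(scale=0.8, size=n)
achievement = 0.35 * implementation + 0.1 * leadership + rng.normal(size=n)
data = pd.DataFrame({"leadership": leadership,
                     "implementation": implementation,
                     "achievement": achievement})

# Path a: leadership -> implementation (the mediator).
path_a = sm.OLS(data["implementation"],
                sm.add_constant(data["leadership"])).fit()
# Paths b and c': mediator effect and direct effect in one equation.
path_bc = sm.OLS(data["achievement"],
                 sm.add_constant(data[["implementation", "leadership"]])).fit()

indirect = path_a.params["leadership"] * path_bc.params["implementation"]
direct = path_bc.params["leadership"]
print(f"direct={direct:.2f} indirect={indirect:.2f} total={direct + indirect:.2f}")
```

A full SEM tool would also report model fit statistics, which is how the parent-outcomes model is judged inadequate in the results below.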
Results
Quality of the evidence. Table 4 reports the sources of data for each variable, response
means, and standard deviations aggregated to the school level, scale reliabilities, and
number of items in each of the scales. The internal reliabilities of all scales are
acceptable, ranging from .72 to .96.
[Table 4. Source of data, mean, standard deviation, and reliability for variables in the framework (columns: Source, Mean, SD, Reliability, Number of items). Table data not reproduced.]
Results of the factor analyses indicated that, in 17 of the 22 analyses, only one
factor was extracted from the individual items or scales analyzed. Items measuring
teacher leadership loaded on two factors, but neither was conceptually cohesive, nor
did reliability improve when the factors were treated as separate scales. Items
measuring principal leadership and communication loaded on two factors; one factor
contained items with principal ratings only and the second factor included items also
rated by parents. Teacher outcomes loaded on two factors: outcomes for instructional
practices and effects on working conditions. Separate scales were developed for each
factor extracted from the teacher outcome measures.
Results of the t test and ANOVA indicated a mixed pattern of responses from
different sources. Administrators generally rated measures higher than teachers, while
parents sometimes were higher and other times lower than administrators. However,
these differences at the individual level did not appear to affect ratings when
aggregated by school since the number of respondents from a particular source did
not predict the school's rating of a measure. Principal leadership was given the highest
rating (m = 4.23), indicating that most respondents agreed their principal was
providing effective leadership for the school improvement effort. Parent leadership
was given the lowest leadership rating (m = 3.66) of the three sources of leadership.
Parents generally agreed (m = 4.04) that they participated in their child's learning at
home but were somewhat less certain that they participated at school (m = 3.85) or
that the school improvement effort had outcomes for parents (m = 3.53).
Table 5. Standardized total effects for independent and mediating variables on mean student
achievement and perceived outcomes for students, principals, teachers, and parents

                            Achievement scores         Perceived outcomes for
                            2-year    2-year
                            mean      gain      Students  Principals  Teachers  Parents¹
Independent variables:
  Out-of-school support      .01       .00       -.06*      .25*       -.07      -.02
  Parent participation      -.15      -.05        .20*      .04        -.08       .49*
  School leadership          .11      -.06        .48*      .04         .55*      .25*
Mediating variables:
  Planning                  -.01       .00        .30*      .00         .34*      .09
  Implementation process     .35*     -.17        .45*      .09         .51*      .40*
  Contents of SI plan       -.23       .11        .28*     -.05         .33*     -.06
Percentage of explained
  variance for DV            7%        2%        51%        7%         46%       47%

¹The model with perceived parent outcomes as the dependent measure had an inadequate fit
with the data.

As the second column in Table 5 indicates, implementation processes had a
significant effect on students’ mean achievement over the 2 years. At the same time,
several variables (parent participation, contents of the plan, and planning) had
nonsignificant, negative effects on mean student achievement.
With respect to perceived student outcomes, school leadership had the strongest
relationship (.48), and parent participation had the second weakest, but still
significant, relationship (.20). Out-of-school support
was the only variable with a significant relationship with outcomes for principals.
School leadership (.55) and implementation processes (.51) had the strongest
relationship with perceived outcomes for teachers, whereas parent participation had
no significant relationship. Although parent participation (.49) had the strongest
relationship with parent outcomes, the evidence must be treated with caution because
the model as a whole did not meet the fit criteria.
The model which tested effects on perceived outcomes for students explained the
largest proportion of variation in outcomes (51%), whereas the models testing mean
student achievement and outcomes for principals explained the smallest variation
(7%). Almost seven times as much of the variation in outcomes for teachers as
compared with principals was explained by this model (46% vs. 7%).
Our study inquired about the effects on student achievement and on outcomes for
teachers, parents, and administrators, of a very robust version of SIP. Our two-
phased, mixed-methods study was conducted in a context largely free of other
provincial capacity-building initiatives, but a context that did offer motivational
inducements for change in the form, for example, of curriculum standards and a
standards-driven provincial testing system that resulted in the public rankings of
schools in some districts (see Leithwood, Jantzi, & Steinbach, 2002, for the other
inducements). In this context, SIP was the government’s strategy of choice for
improving the performance of low performing or ‘‘failing’’ schools (not a label
publicly used in Ontario). And although other goals could be included, all schools
were required to focus on some set of literacy and numeracy skills in their school
improvement plans.
The first, qualitative, phase of our study combined evidence from a longitudinal
study carried out in 10 schools with a review of prior research, to produce a robust
model of SIP. Phase Two tested this model quantitatively using three measures of
impact: perceptions of student impact by teachers, parents, and administrators;
perceptions of outcomes for teachers, parents, and administrators; and a combined
Grade 3 and 6 reading, writing, and math achievement score averaged over 2 years.
These concluding remarks focus on four results of the study and selected implications
for research and practice.
First, estimates of SIP effects on students depended very much on the ‘‘instruments’’
used to measure those effects; judgments of teachers, parents, and administrators were
largely unrelated to the results of provincial test scores, a result very similar to the
evidence reported by Flinspach and Ryan (1992) and described in our introduction.
One might argue that those teachers, administrators, and parents directly involved with
the SIP processes had opportunities to develop an appreciation of impact much more
nuanced and detailed than was possible with a provincial achievement test; one of the
reviewers strongly advocated this explanation. But it might also be the case that those
directly involved in SIP simply had a tacit stake in their own success and wildly
miscalculated their actual contribution to improving the achievement of students. This
possibility must at least be entertained seriously for two reasons: First, such errors in
human judgment are common and have been well documented (Kahneman, Slovic, &
Tversky, 1982); and, second, actually improving achievement has proven to be an
extraordinarily difficult and badly underestimated challenge, even when vastly greater
resources are devoted to it (e.g., as with Comprehensive School Reforms) than are
typical of the resources usually available for SIP.
The overall effects of our robust SIP model explained a significant amount of the
variation in student achievement, a second noteworthy result of our research. While
some will consider the amount of variation (7%) to be quite small, it takes on
considerable importance when compared with the total variation typically explained
by all factors associated with schools. Such variation is usually estimated to be from
12% to 20% using indicators of achievement similar to those used in this study (e.g.,
Leithwood & Jantzi, 1999). So some things included in our overall model of SIP
processes clearly add value to students’ school experiences.
involving parents and students in the school improvement team had a negative effect
on such outcomes. However, asking students and parents for input in the planning
process had a positive effect on the teams’ focus on teaching and learning. One might
conclude from this evidence that involving a small number of parents in school
decision-making helps keep the focus of SIP where most believe it should be—
student outcomes—but that the majority of parents should be encouraged to devote
most of the precious little time they have available to helping educate their own
children. Providing such encouragement likely falls to those in school leadership roles
who will need to work with teachers who may still be reluctant about finding a
meaningful role for parents in the instruction of their students.
Fourth, within our SIP model as a whole, those processes associated with
implementation of the improvement plan accounted for by far the largest effect on
student test scores. This included opportunities for staff development, the ability of
the school, as a whole, to learn from new ideas and to problem-solve, and
collaboration among those in the school. SIP implementation also encompassed
shared norms of continuous improvement, recognition of hard work and results, and
structures in the school that allow staff the time to problem-solve together.
Indeed, neither the content of the plan nor the processes used to develop it had any
significant effect, at least on test score estimates of student learning. This may come as
a shock to many administrators and consultants who agonize over the planning
process itself, worrying, for example, about how planning can be carried out in highly
participatory ways in order to ensure high levels of teacher and parent commitment
to the plan. We cannot say from our data that they are wasting their time.
But sorting out whether variations in the processes leading up to the formulation of
school improvement plans matter (and under what conditions) is clearly a worthwhile
goal for future SIP research.
While seeming to fly in the face of the many admonitions to involve all stakeholders
from the outset of a change initiative, we speculate that under some conditions often
faced by low performing schools (e.g., impending school reconstitution), it may be a
more productive use of scarce time for at least trusted school leaders, using reliable
and transparent data, to first determine the most urgent goals for school improvement
and then enlist the energies of those who must be involved in achieving the goals.
Exploring the relationship between the context in which schools find themselves and
the most productive approaches to improvement planning is an important goal for
further research.
Notes
1. Initially sponsored by the Ontario government’s Education Improvement Commission (EIC).
When EIC closed its doors, the Canadian Education Association assumed responsibility for
supervision of the project.
2. These tests are administered by the Educational Quality and Accountability Office (EQAO).
3. These researchers included Patricia Allison, Susan Drake, Dany Laveault, Ronald Wideman,
and Glen Zederayko.
References
Bauch, P., & Goldring, E. (1995). Parent involvement and school responsiveness: Facilitating the
home-school connection in schools of choice. Educational Evaluation and Policy Analysis,
17(1), 1 – 21.
Bauer, S. C., & Bogotch, I. E. (2001). Analysis of the relationships among site council resources,
council practices and outcomes. Journal of School Leadership, 11, 98 – 119.
Broadhead, P., Hodgson, J., Cuckle, P., & Dunford, J. (1998). School development planning:
Moving from the amorphous to the dimensional and making it your own. Research Papers in
Education, 13(1), 3 – 18.
Chrispeels, J., Castillo, S., & Brown, J. (2000). School leadership teams: A process model of team
development. School Effectiveness and School Improvement, 11, 22 – 56.
Dellar, G. B. (1994, April). Implementing school decision-making groups: A case study in restructuring.
Paper presented at the annual meeting of the American Educational Research Association,
New Orleans, LA.
Dellar, G. B. (1995). The impact of school-based management on classroom practice at the
secondary school level. Issues in Educational Research, 5(1), 23 – 34.
Earl, L., & Lee, L. (2000). Learning for a change: School improvement as capacity building.
Improving Schools, 3(1), 30 – 38.
Earl, L. M., & Lee, L. (1998). Evaluation of the Manitoba School Improvement Program. Toronto,
Canada: OISE/UT.
Eastwood, K., & Tallerico, M. (1990). School improvement planning teams: Lessons from practice.
Planning and Changing, 21(1), 3 – 12.
Feuerstein, A., & Dietrich, J. (2003). State standards in the local context: A survey of school board
members and superintendents. Educational Policy, 17(2), 237 – 257.
Feiman-Nemser, S., & Floden, R. E. (1986). The cultures of teaching. In M. Wittrock (Ed.),
Handbook of research on teaching (pp. 505 – 526). New York: Macmillan.
Flinspach, S. L., & Ryan, S. P. (1992). Vision and accountability in school improvement planning.
Chicago: Chicago Panel on Public School Policy and Finance.
Fullan, M., & Hargreaves, A. (1991). What’s worth fighting for in your school? Milton Keynes, UK:
Open University Press.
Giles, C. (1998). Control or empowerment: The role of site-based planning in school improvement.
Educational Management and Administration, 26(4), 407 – 415.
Glickman, C. D. (1993). Renewing America’s schools. A guide for school-based action. San Francisco:
Jossey-Bass.
Glover, D., Levacic, R., & Bennett, N. (1996). Leadership, planning and resource management in
four very effective schools. Part II: Planning and performance. School Organisation, 16(3),
247 – 261.
Griffith, J. (2001). Principal leadership of parent involvement. Journal of Educational Administration,
39(2), 162 – 186.
Hargreaves, D. H., & Hopkins, D. (1991). The empowered school: The management and practice of
development planning. London: Cassell.
Hargreaves, A., & Macmillan, R. (1991, April). Balkanized secondary schools and the malaise of
modernity. Paper presented at the annual meeting of the American Educational Research
Association, San Francisco.
Harris, A. (2001, January). Change at the learning level. Paper presented at the International
Congress for School Effectiveness and Improvement, Toronto, Canada.
Harris, A., Muijs, D., Chapman, C., Stoll, L., & Russ, J. (2003, May). Raising attainment in schools
in former coalfield areas (Research Rep. 423 prepared for the Department for Education and
Skills). University of Warwick, UK.
Harris, A., & Young, J. (2000). Comparing school improvement programmes in England and
McBee, M. M., & Fink, J. S. (1989). How one school district implemented site-based school
improvement planning teams. Educational Planning, 7(3), 32 – 36.
McInerney, W. D., & Leach, J. A. (1992). School improvement planning: Evidence of impact.
Planning and Changing, 23(1), 15 – 28.
Mintzberg, H. (1994). The rise and fall of strategic planning: Reconceiving roles for planning, plans,
planners. New York: The Free Press.
O’Donoghue, T. A., & Dimmock, C. (1996). School development planning and the classroom
teacher: A Western Australian case-study. School Organisation, 16(1), 71 – 87.
Raywid, M. (1992). Choice orientations, discussions, and prospects. Educational Policy, 6(2), 105 –
122.
Reeves, J. (2000). Tracking the links between pupil attainment and development planning. School
Leadership & Management, 20(3), 315 – 332.
Sackney, L., Walker, K., & Hajnal, V. (1998). Leadership, organizational learning, and selected
factors relating to the institutionalization of school improvement initiatives. Alberta Journal of
Educational Research, 44(1), 70 – 89.
Sanders, J., & Epstein, J. (1998). School-family-community partnerships and educational change.