
KJEP 8:1 (2011), pp. 3-28

School evaluation policies and systems in Korea: A challenge of social validation

Juhu Kim
Ajou University, Korea

Juah Kim
Korean Educational Development Institute, Korea

Hoi K. Suen
Pennsylvania State University, U.S.A.

Abstract

For school evaluation policies and systems in Korea to be effective, they need
to have social validity. This paper reviews the historical context of school evaluation
policy development, school evaluation system design, and critical factors contributing
to the implementation of the policy from the perspective of social validity. The results
showed that the school evaluation policy has been evolving continuously. Although the
central government and local education offices have spent much time and energy on
the development of policies and systems, implementation has been slow and rocky. In
addition, the overall school evaluation policy is still facing several problems, including
the lack of utilization of achievement data, the need to further improve the quality of
evaluator judgments, and the need to continuously validate evaluation indexes. Despite
such problems, evaluation of school quality has become an important component of school
accountability, which is an aspect of education increasingly demanded by taxpayers. The
development in the last 15 years can be understood as a slow, arduous but steady process
of social validation toward a consensual school evaluation policy and system.

Keywords: school evaluation, evaluation policy, accountability, social validity, quality of schooling

KEDI Journal of Educational Policy, ISSN 1739-4341
© Korean Educational Development Institute 2011, Electronic version: http://eng.kedi.re.kr

Introduction

Social validity as a concept refers to the extent to which the procedures, goals
and outcomes of research, of a psychological intervention, or of a social intervention
are regarded as important and acceptable to the population in general, and to
direct stakeholders in particular. It has long been considered a critical element of
assessment and intervention in special education and applied behavior analyses (e.g.,
Fawcett, 1991; Wolf, 1978) and clinical interventions (e.g., Foster & Mash, 1999); it is
considered an important aspect of validity for traditional quantitative psychometrics
(e.g., Messick, 1995), qualitative investigations (e.g., Patton, 2002), as well as mixed
method research (e.g., Dellinger & Leech, 2007; Onwuegbuzie & Johnson, 2006).
The process of social validation within the context of a single individual assessment
instrument generally involves the gathering of evidence of social congruence,
consensus, and absence of discrepancies across diverse sources of information
or stakeholders (e.g., Kramer, 2011). For systems of assessment and evaluation,
however, the process of validation may involve a lengthy process of consensus-building
across communities, beyond the relatively simple process of gathering
evidence of congruence. For large-scale and complex evaluation and assessment
systems, it may be necessary to conceptualize their social validation from a hermeneutic
or socio-cultural perspective (Moss, Girard, & Haniford, 2006).
The design and implementation of school evaluation policies in Korea can be
viewed as an illustration of the challenges and difficulties involved in the process of
social validation of such a large and complex system. In 1995, the Korean Presidential
Committee on Educational Innovations suggested that school evaluation should
be utilized for the improvement of school quality. Based upon suggestions by the
Presidential Committee, the Korean government developed a new policy aimed
not only to enhance the quality of schooling, but also to improve the quality of
accountability (J. Kim, T. Chung, J. Kim, & S. Kim, 2004). For these purposes, the
Korean government invested about 3 million dollars over the last decade to support
school evaluation activities. In coordination with the national policy, local education
offices also developed and implemented their own school evaluation policies.
With interest from the national government and support at the local level, school
evaluation activities have become widespread throughout the nation and have
contributed to the improvement of schooling.
Under the new policy, the Korean government designed a school evaluation
model focusing on interactions between external evaluators and school teachers
(J. Kim, Choi, Ryu & Lee, 1999; J. Kim, 2004). Based upon a qualitative evaluation
approach, professional evaluation teams visited schools and conducted in-depth
reviews of entire school systems through classroom observations, interviews with
teachers, school document analyses, and conversations with parents and students.
Unlike school evaluation models currently dominant in the United States and
Great Britain, both of which emphasize student achievement, the school evaluation
approach in Korea has focused on a comprehensive analysis of school systems.
Thus, instead of analyzing student achievement data using quantitative analyses,
a comprehensive systemic analysis of school dynamics was pursued. After the
school visits by a team of evaluators, each school received an evaluation report
including an overall analysis of school management, quality of instruction, parental
involvement, student motivation, and so forth. The evaluation results were expected
to be used for new curriculum planning, improvement of instructional quality, better
communication between teachers and students, and parent involvement (J. Chung, J.
Kim, & J. Kim, 2004a).
In concordance with the government’s approach to school evaluation, local
education offices (LEOs) also developed their own school evaluation activities.
Although the LEOs have aimed for the same general goals of enhancing school
quality and constructing accountability systems as were proposed by the
government, their approaches were somewhat different. Instead of utilizing
qualitative reviews, they preferred to report more objective information using
measurable evaluation indices (J. Kim, T. Chung, et al., 2004). Additionally, their
evaluation systems were oriented toward checking whether or not LEOs followed
their own educational procedures. Thus, results of LEO school evaluations were
usually limited to the reporting of school statistics for administrators, rather than
providing contextual information about the school system.
In spite of the national interest and support from LEOs, the impact of school
evaluation activities has been viewed as negative. School teachers pointed out
that the overall national evaluation system was not contributing to the improvement
of school quality (Y. Kim, H. Kim, J. Yoon, J. Kim, & Huh, 2000). They believed
that the national evaluation system just investigated narrow aspects of schooling
at a surface level without examining overall school quality. Policy makers and
administrators also criticized the evaluation system for providing low level feedback
to the schools while incurring high management costs (Jeong, Yoo, Kim, Kim, & Kim,
2003; B. Kim, 2004).
Given these criticisms, the government could not continue to implement the
evaluation system at the national level. After 2004, national-level evaluations ended
and evaluations were conducted only at the local education office level (T. Chung, K.
Nam, & J. Kim, 2008). In other words, although the policy was originally proposed at
the national level, school evaluation became an exclusively local concern. As of 2010,
16 local education offices were conducting school evaluations without much support
from the central government. The only support they received was to make a link
between the local education offices and a government-funded research institute, the
Korean Educational Development Institute (KEDI).
Although the school evaluation policy began with much national attention, it
was soon discovered that the goals were very difficult to achieve. Why the policy
was not successful has been debated continuously. The numerous discussions at
the national and the local level have focused on the gap between the intended goals
and the unintended consequences (T. Chung, J. Kim et al., 2004a; B. Kim, 2004). As
an example of an unintended consequence, school teachers spent much time
preparing glossy, good-looking reports. Since the evaluation process heavily
relied on site-visits, the quality of each school was based on the judgments of the
site-visitors. Thus, in order to impress these external evaluators, teachers prepared
beautiful-looking reports with lots of graphs, diagrams, and tables. As time went on,
school teachers complained about these time-consuming tasks. The evaluation was
perceived as a one-shot summative inspection, rather than a new system that could
help improve the quality of schooling formatively (T. Chung, K. Nam, et al., 2008).
One of the more fundamental criticisms from field practitioners was about the
gap between the philosophical background behind the school evaluation policy and
schooling in reality within the context of Korean education. The philosophical belief
behind the school evaluation policy, a part of neo-liberal educational reforms (Park,
2009), was rooted in client-oriented education with school autonomy. So, it was
assumed that each school had its own plans for educational innovation, pursuing the
improvement of school quality. In addition, the results of school evaluations were
supposed to be disclosed to the public. However, field practitioners did not experience
decentralization and democratization through the implementation of the school
evaluation policy, although the policy environment itself presumed the liberalization
of the educational system. In actuality, the central government did not cede any
authority over the school system or educational policy decision making. In
particular, administrators were hesitant to share the results of the school evaluations
with parents. As a result, it was not easy to reach a consensus about the meaning of
evaluating school quality as well as school evaluation policy.
Given these circumstances, one critical question arose: “What is the primary
rationale behind the school evaluation policy in Korea?” Another question was why
the policy encountered so many barriers during implementation in the educational
system. Were the problems beyond issues of process-related components (e.g.,
development of evaluation index, training of evaluators, analysis of survey data),
but related to social/contextual factors? These questions are critical ones about the
validity of the school evaluation system within the socio-cultural context of Korea.
Although a few research studies (T. Chung, J. Kim et al., 2004a; B. Kim,
2004; S. Kim, Chung, & Kim, 2009) have investigated the quality of school
evaluation policies, they did not focus on the validity of school evaluation from wider
socio-cultural perspectives. Thus, the main purpose of this paper was to review
the school evaluation policy in Korea from the perspective of social validation. As
such, this paper also serves as part of the continuing social dialog toward a goal of
a final, valid system that will ensure the future of school quality. To understand the
components involved in this social validation process, it is necessary to examine:

1) What are the educational and historical contexts of school evaluation policy
development within the Korean society?
2) How was the school evaluation system developed and redesigned for the
improvement of school quality?
3) What are some of the critical socio-cultural and contextual factors involved
in the implementation of school evaluation policies within the educational
system of Korea?

Analysis framework and procedure


Within the context of Korean education, school evaluation has been perceived
as a systemic process assessing the strengths and weaknesses of schools for the
improvement of school quality (Lee, Kim, & Lee, 1999). Given this understanding,
school evaluation was defined as “a
systemic process assessing values of school components and functions using valid
criteria and standards” (T. Chung, J. Kim, T. Lee, & J. Kim, 2005). Based upon this
definition, the researchers analyzed school evaluation policies defined as political
decisions to address school quality and to achieve the goal of school evaluation. In
this study, under the school evaluation policy, the school evaluation system refers
to a set of components of school evaluation starting from evaluation design to the
dissemination of final results to the public.
For the analysis of school evaluation policies, the concept of social validity was
employed as an analytical framework. Thus, in this study, school evaluation policies
were reviewed as a socio-cultural process toward building a consensus about the
meaning of evaluating school quality. Specifically, a central focus was whether or not
the policies were acceptable to the Korean people in general and to specific diverse
communities in Korea. Challenges and difficulties involved in the process of dialogue
among direct stakeholders were also identified.
In terms of analysis design and procedure, this study mainly relied on
document analysis. For the document analysis, first, this study considered
suggestions for a historical review of policy (Gale, 2001): the issues and problems
of the policy, the way they were being addressed, the status of the policy, and the
nature of the change from the past to the present. Second, the researchers analyzed
official documents such as annual reports of school evaluation at the national level,
school evaluation reports published by local education offices, central government
documents, local education office documents, journal articles, and documents
from public hearings. Third, the researchers utilized diverse methods for validity
examination of the analysis. For instance, when the facts or contents of the school
evaluation policies were not clear, the researchers contacted administrators at the
central government and local education offices. Through either email conversations
or phone calls, more detailed and correct information was collected. For more valid
interpretations of the process and results of the school evaluation implementation,
a series of panel discussions was also organized. In particular, external experts and
researchers who helped in the development of the school evaluation policy from the
beginning were invited. Through these panel discussions, a thorough review and valid
interpretations of the policy's development were possible.

Development of school evaluation policy in Korea

For the review of school evaluation policies in Korea, it is useful to conceptualize
five distinct time periods that represent five different stages of development (see
Table 1): before 1995, 1996-1999, 2000-2003, 2004-2005, and after 2006. The year 1995,
during which the presidential committee announced their recommendation for a
school evaluation policy, was a watershed for school evaluation. The time since 1995
consists of four distinct time periods (1996-1999, 2000-2003, 2004-2005, and after 2006)
in terms of how the school evaluation policy was conceptualized and implemented.

Table 1. A short summary of school evaluation policy in Korea


Before 1995
  System: none
  Evaluation activities: indirect monitoring at local education offices
  Stakeholders: MOE
  Utilization of results: none

1996-1999
  System: under construction
  Evaluation activities: model development and small-scale pilot study
  Stakeholders: MOE
  Utilization of results: preparation for a larger-scale evaluation project

2000-2003
  System: MOE's direct evaluation
  Evaluation activities: national project through qualitative approaches
  Stakeholders: MOE + LEOs
  Utilization of results: national report writing, finding good schools

2004-2005
  Evaluation activities: redesign period

2006-present
  System: MEHRD's indirect evaluation
  Evaluation activities: holistic assessment of the school system
  Stakeholders: MEHRD + LEOs + parents
  Utilization of results: national report writing, communication with the public

No systemic school evaluation: Before 1995


Before 1995, systemic evaluation of school quality had not been a concern at
all. With a nationally-controlled education system, evaluation of school quality was
of interest to neither local schools nor local communities. Rather, the quality was
directly managed by the central government. Ironically, quality of schools was never
assessed systemically, even though quality as a desired characteristic has always
been emphasized by the government and by the Korean people.
Under the nationally-managed system, autonomy of individual schools had
been limited. For instance, the central government controlled not only teacher
recruitment, training, and placement, but also very detailed individual school-level
financial planning. A national curriculum, along with textbooks, was developed and
managed by the central government. As such, the role of school superintendents was
basically one of implementing well-designed plans by the government, rather than
creatively developing their own policies.
Under these circumstances, it was the central government that should be held
accountable for the quality of schooling. If the government wanted to examine
and evaluate the quality of schooling, they would be evaluating their own work.
Although each school was also responsible for quality, that responsibility was
relatively small compared to that of the government (H. Kim et al., 2005).
One of the reasons why school quality was not assessed has to do with the
historical concept of schooling in Korea. Traditionally, in Korean society,
school was never treated as a social unit. As such, before 1995, the idea of quality
of school was an ambiguous construct which could not be directly evaluated.
School-based management or individual school accountability were alien concepts
unconnected to the core of the education system in Korea.
Historically, quality control for the Korean educational system was not
accomplished via the assessment of school quality but through personnel evaluation.
For instance, during the Yi dynasty (1392 – 1910), the quality of schools was indirectly
managed through the evaluation of local government officers (J. Kim, 2005). At
that time, the performance of local government officers was evaluated using seven
different criteria. One of the seven criteria was the encouragement of education. If
an officer reported good evidence showing an increase in student enrollment at local
education centers, the officer obtained high ratings for his personnel evaluation.
Given this historical context, the concept of school accountability was also
moot. Since the central government directly managed the entire school system,
accountability was not a concern for individual schools. School principals simply
followed policies and directives from the central government and would not
creatively manage their schools based upon their own visions. In addition, they were
not expected to report the outcomes of schooling to the public or the parents. Thus,
prior to 1995, school level evaluation as a concern had been absent from the views of
the central government, local education offices, as well as those of teachers, parents,
and administrators.
An understanding of this historical and socio-cultural background is
critical for the understanding of some of the subsequent developments. The 1995
recommendations of the presidential commission fundamentally suggested the
imposition of quintessentially western concepts of school quality, school evaluation
and accountability onto a school system for which such concepts were alien and
incongruent.


Trial and error: 1996 – 1999

After the central government had accepted the recommendations from the
presidential committee on educational innovations, the Ministry of Education (MOE)
began in 1996 to develop a school evaluation system with assistance from KEDI.
Since KEDI did not have any prior experience in school evaluation, the first stage
of development was to review many different school evaluation models abroad. In
particular, KEDI paid attention to the school evaluation model used by the Office for
Standards in Education (OFSTED), a British national institute for school evaluation.
After finishing their theoretical review of school evaluation models, KEDI
developed a pilot evaluation model in 1999 (J. Kim, K. Choi, et al., 1999; Ryu & Kim,
1999). The main methodology behind the evaluation model was a series of qualitative
inquiries about the overall quality of schooling at each local school through
document analyses, interviews, classroom observations, and other similar methods.
A team of about 8 evaluators first reviewed various pieces of evaluation evidence
reported from an individual school. After reviewing the evidence, they collected
more direct evidence through a site visit. A final evaluation report was then prepared
by the team and sent to the school.
In coordination with the government’s efforts for the development of the school
evaluation system, LEOs also began to consider school evaluation. Since they did not
have the resources to develop sophisticated evaluation models based on any complex
theoretical framework, their approach to school evaluation tended to be simplistic.
They identified measurable school characteristics (e.g., frequency of in-service
workshops for teachers, number of instruction manuals, teacher-child ratio, number
of certificates earned by students) that they believed reflected the quality of schooling and
developed checklists for these characteristics (T. Chung, J. Kim, et al., 2004a). Using
such checklists, administrators or principals reported school scores. The scoring
system tended to be simple and non-systematic.
During the election of the superintendent of each LEO, the candidates would
propose their vision of how to implement educational policies handed down from
the MOE. The main focus of individual school evaluations at the LEO level, with
their checklists and scores, was to inspect whether the superintendent’s
procedures had been implemented as planned. The use of such simple checklists
instead of more sophisticated and comprehensive methods was at least partly
due to a lack of well-trained evaluators at the local schools. As a result of using
such superficial checklists, the impact of LEO school evaluation on each LEO was
minimal. Teachers, principals, administrators, and parents did not believe that the
LEO evaluation information reflected the actual quality of the schools. Although the
results of each school evaluation were distributed to the teachers, administrators, and
parents, these results were largely ignored. The local school evaluation information
was treated as a technical supplement to the report of the visitation team.


There was an apparent discrepancy and lack of coordination between the
evaluation approach of the MOE and those of the LEOs. While the MOE used a
more comprehensive and interactive evaluation model relying on school visits, LEOs
used a more mechanical process of completing checklists of items of local interests
only. The model used by the MOE closely resembled what has been described by
Stufflebeam as the “Criticism and Connoisseurship approach” (2000, p. 29); the
approach used by the LEOs was akin to a rudimentary form of the “Management
Information Systems approach” (Stufflebeam, 2000, p. 36). This lack of concordance
was at least partly due to a lack of prior consensus about the design between the
MOE and LEOs. Although the school evaluation policy had great potential benefit,
those involved were unable to develop and implement the policy with a workable
model. Thus, from 1996 to 1999, the school evaluation system in Korea can best be
described as going through a stage of trial and error. During this period,
considerable incongruence and discrepancies in terms of conceptual models and
purposes were quite apparent. These can be traced to the fundamental incongruence
of the historical and socio-cultural concepts of school quality and those of the MOE.
At this stage of development, a wide conceptual gap existed and social validation
was not possible.

Consensus and commitment: 2000 - 2003


With lessons learned from the trials in 1996-1999, the MOE and KEDI were
prepared for a nationwide implementation of an improved model. However, because
of limited funding, the implementation began with only 16 schools in 2000. While
these participating schools were evaluated through intensive interactive interviews,
additional information was also collected via a mail survey from a sample
of other schools beyond the participating schools to supplement the evaluation.
Evaluators and field practitioners (i.e., principals, teachers, and administrators)
alike were excited about the implementation of this design. It was the first systemic
evaluation project focusing on the examination of school quality.
Through this evaluation project, all the evaluators and field practitioners from
the participating schools tried to learn about each other. In this process, a strong
consensus on the goal of school evaluation, especially in terms of the dynamic
interaction between evaluators and school system, emerged. Instead of criticizing
the quality of schooling or examining the school system using evaluation indexes,
they were eager to build a new, mutually acceptable system for the future of school
evaluation. The consensus was to continue to utilize the evaluation model developed
by the MOE and the KEDI during 1996-1999.
There was also a consensus among all participants to share results of the
evaluation nationally in hopes that such sharing would provide meaningful benefits
to the MOE, KEDI, and school practitioners (Yoo et al., 2001). They all agreed to
accept the necessity of school evaluation at both the national and the local level.
Through analyzing all participating schools as a system, they had the opportunity
understand how the overall school system worked. In spite of earlier concerns from
school teachers regarding the negative impact of school evaluation, the consensus
among the participants led to a positive spirit of purpose that all involved would
work together for the reconstruction of curriculum, instructional methods, school
leadership, communication with parents, and so forth. The participants also pointed
out that more research funds and resources should be utilized for the continuous
improvement of the evaluation model itself. In addition, a specialized research
institute for school evaluation at the national level was also proposed.
However, soon after the initiation of this consensus and commitment, negative
responses towards the evaluation system began to emerge from field practitioners.
For instance, many school teachers were not comfortable with intensive interviews
and observations that took an entire week. Some evaluators began to doubt the
effectiveness and efficiency of the qualitative methodology used. In the next three
years, the number of schools participating in the evaluation project increased
from 16 to 100 (see Table 2); partly due to budgetary constraints and partly due to
complaints about the length of visits, site visitation days were reduced from 6 to 2
days per school. Although the number of trained evaluators more than doubled from
139 to 290 by 2002, it suddenly dropped to 92 evaluators in 2003. During the same
4 years, the number of evaluators per team decreased from 13 to 3. Concomitantly,
the sample of schools from which supplemental evaluation data were collected via
a survey expanded from 108 to 756 schools. Meanwhile, as more and more schools
joined the core group of site-visit participants, fewer resources were allocated for these
visits.

Table 2. Implementation of school evaluation at national level in 2000-2003


                              2000          2001          2002          2003
Number of schools               16            48           100           100
Visiting days per school      6 days       4-5 days      3-4 days      2 days
Evaluators per team             13           6-11          6-8             3
Total number of evaluators     139           165           290            92
Schools surveyed               108           200            -            756
Survey method              mail survey   mail survey       -        mail survey

Note: Site-visit evaluators included principals, teachers, administrators, and professors.

While the MOE, KEDI, and the participating schools implemented the
site-visitations and surveys, all LEOs continued to pursue their own improvement of
school quality by continuing to use the evaluation indexes they had developed
before 2000. The numerical indexes were developed locally and were targeted to
specific procedures and concerns of local education offices (J. Kim, T. Chung, et al.,
2004). In other words, unlike the national KEDI evaluation model, the evaluation
approach at the local level continued to focus on narrow numerical measures of local
concerns, rather than assessing quality of schooling through more comprehensive,
holistic observations. During 2000-2003, these two attempts ran in parallel with little
interaction or cross-referencing with one another.
In 2003, school administrators, the National Audit Center (NAC), and the newly
expanded Ministry of Education and Human Resources Development (MEHRD)1) concurrently
criticized the school evaluation system. The general consensus was that the
evaluation system was not contributing to the improvement of school quality. School
administrators complained about the burden of evaluation preparation, the quality of
evaluators, and the perceived lack of reliability or validity of evaluation information.
The NAC was critical of the poor cost-benefit ratio of the evaluation system. The
NAC also pointed out a partial redundancy of evaluations conducted by KEDI and
those conducted by local education offices. The MEHRD and local education offices
complained about the difficulty of obtaining meaningful information from school
evaluation results to guide accountability. Hence, the school evaluation project at the
national level was terminated at the end of 2003.
It can be observed that, at this stage, the two initial communities of stakeholders,
i.e., the MOE and LEOs, had made great strides toward consensus and congruence.
However, new stakeholders, i.e., the NAC and the MEHRD, emerged with different
perspectives and expectations. This became another step in the long and complicated
process of social validation.

Redesign: 2004 - 2005


Based upon these criticisms, KEDI redesigned the system of school evaluation
during 2004-2005. The new design was to be more than a system of indexes and
reports. The main idea behind the new school evaluation system was to develop a
systemic review of school quality. From examination of school objectives to outcome
analysis, an integrated evaluation system was proposed. Additionally, the system
emphasized the utilization of the evaluation results. For the appropriate use of
school evaluation results, it was strongly recommended that LEOs consider school
accountability.
For the new design, KEDI analyzed the criticisms about the previous attempts
from teachers, administrators, evaluation scholars, the MEHRD, and the NAC. Results of
their analyses indicated that there was a large gap between the interests of national
leaders and those of field practitioners (T. Chung, J. Kim, et al., 2004a). Although the
formative feedback from site visits was welcomed by many schools, such feedback
provided little information for educational decision making at the national level.
While such site visits produced information about specific strengths and weaknesses
of the local schools at the micro-level, the information was not useful at the macro-
level to help the MEHR and educational administrators to gain an understanding
of the overall quality of schooling at the national level. So, the MEHRD and
administrators suggested the development of a larger system of school monitoring,
rather than evaluating each school’s specific local qualities. They also recommended
making a strong connection between school evaluation at the national level and those
of LEOs.
KEDI also reviewed many different school evaluation models developed by
LEOs but found these models to be locally unique and to be neither systematic
nor generalizable. For example, some offices used a hodgepodge of more than 100
different indexes but had no theoretical rationale for the choice of indexes.
In order to develop Korea’s unique school evaluation model, many different
evaluation approaches used in other countries were examined by KEDI. For instance,
school evaluation systems in the United Kingdom and the United States were
thoroughly reviewed (Erpenbach, 2003; Gong, Blank, & Manise, 2002; Marion &
White, 2002; OFSTED, 2003a; OFSTED, 2003b; OFSTED, 2003c; OFSTED, 2004; Potts,
2002; Sammons, Hillman, & Mortimore, 1995). The results provided a couple of insights. For
instance, since the use of student achievement data for school evaluation was not
permitted in Korea, site-visitation oriented approaches (e.g., traditional accreditation
system, qualitative interview, classroom observation) were seriously considered.
With this perspective, OFSTED's evaluation model was investigated in order to
combine quantitative and qualitative evaluation methods. In particular, OFSTED’s
systematic assessment approach utilizing a collection of qualitative evidence was
very helpful for the development of an integrated evaluation system combining site
visiting results and survey data.
Through the review of school evaluation systems in the US, a recent policy, No
Child Left Behind (NCLB), which emphasizes students' academic growth, was also examined.
Because of limitations in the utilization of student achievement data in Korea, it
was difficult to gain meaningful insights from the NCLB policy for the redesign of
the Korean school evaluation system. However, the US approach of focusing on
continuous growth with specific objectives (e.g., AYP: Adequate Yearly Progress)
was carefully interpreted (Fast & Hebbler, 2004; Fast & Erpenbach, 2004). In addition,
utilization of the results of school evaluation (e.g., the closing of schools) gave the
reviewers insights for designing a new method for the utilization of evaluation results.
After the review of school evaluation systems in Korea and in the world,
KEDI proposed a revised evaluation system. Unlike the previous school evaluation
model, the newly revised model emphasized the importance of communication
with the public, school outcomes, and policy implementation at the national level.
The main foci of the revised model were as follows. First, evaluation should provide
meaningful results for the improvement of the school system. In other words, as
Stufflebeam et al. (1971) pointed out, the main purpose of evaluation should not be to
prove something, but to improve education. Thus, the main goal of school evaluation
was to get valid evidence of high-quality schooling for continuous growth. Then, the
schools should show appropriate evidence of continuous growth and improvement.
Using the collected evidence, evaluators and school personnel engage in a dynamic
dialogue for the improvement of schooling quality.
Second, a school should be treated as a holistic system. In the new school
evaluation system, one of the foci is to assess the entire system holistically, instead
of measuring a disjoint hodgepodge of school quality indexes. More comprehensive,
system-wide indexes covering qualitative and quantitative evidence should be
utilized instead. So, in terms of assessment methodology, a mix of quantitative
and qualitative approaches was considered.
Third, a systemic review of the entire school system in terms of input-process-
outcome was suggested. In particular, the outcome of schooling, which had not been
considered previously, was a very important new target of evaluation (T. Chung,
J. Kim, & J. Kim, 2004b). For instance, many different types of student outcome
variables were considered: self-directed learning, learning about learning methods,
communication skills, interpersonal relationships, self-concept, and responsibility.

Table 4. Comparison of school evaluation contents before and after 2004

Area                  Before 2004                         New model
--------------------  ----------------------------------  -----------------------------------------------
Goal                  School objectives and planning      School objectives and planning
Curriculum            Curriculum organization             Curriculum organization
                      Extra curriculum                    Extra curriculum
                                                          Teacher's reconstruction of curriculum
Educational support   Utilization of physical resources   Principal's leadership
                      Utilization of human resources      School organization
                                                          Staff management and development
                                                          Facilities and financial support
                                                          Relationship with parents and local communities
Achievement           -                                   Learning about learning methods
                                                          Communication skills
                                                          Interpersonal relationships
                                                          Self-identity
                                                          Responsibility
Communication         -                                   Parents' satisfaction with schooling
with parents
Policy                -                                   Implementation of national policy: the 7th
                                                          national curriculum, utilization of technology,
                                                          performance assessment

Fourth, utilization of school evaluation results was emphasized. In the previous
school evaluation system, feedback from school evaluation teams was limited. In
order to increase the level of each school’s participation in school evaluation, the
utilization of evaluation results including awarding good schools, financial support,
and intensive consulting for poor quality schools was seriously considered. For
instance, the Choongchungnamdo Provincial Office provided school innovation
funds after reviewing individual school restructuring plans (T. Chung, J. Kim, & J.
Kim, 2005).
Finally, communication with parents and local communities was also proposed.
Information about the quality of schooling had never been open to the public in
Korea. Because of a policy of educational equity in Korea, the MEHRD and LEOs
had assumed that the quality of schooling was about the same across all schools.
However, in reality, many research studies (J. Kim, Min, & Choi, 2009, December;
S. Kim et al., 2009; Ryu et al., 2006) had reported that there are large differences
in school quality by geographic regions (e.g., urban vs. rural schools). For the
construction of partnerships among schools, parents, and local communities, open
communication about school quality was strongly recommended.
With the new direction of school evaluation, KEDI developed the School
Evaluation Common Indexes (SECI). The uniqueness of SECI lies in its efficiency
and use of holistic judgments (T. Chung, J. Kim, et al., 2004b). The core functions of
a school system were extracted through an intensive literature review. A short list of
14 key outcomes-based evaluation questions and corresponding rating scales were
prepared. For each question, detailed elements reflecting high quality schooling were
described. Additionally, these questions were written in such a way that evaluators
could figure out what characteristics were necessary to show an appropriate level
of quality. After reviewing each school’s self-evaluation report, evaluators were to
choose an appropriate score for each scale. Very detailed scoring rubrics (1: very poor
– 5: excellent) were developed and provided. After assigning ratings based on school
reports, evaluators would conduct site visits to confirm their scores. As such, the
process and judgment system was standardized and systemically managed through
the indexes.
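The standardized rating workflow just described (14 outcome-based questions, a 1-5 rubric applied first to the self-evaluation report and then confirmed or revised during the site visit) can be illustrated in code. This is only a sketch: the function names, the sample ratings, and the revision mechanism are hypothetical; only the 14-question count and the 1 (very poor) to 5 (excellent) scale come from the text.

```python
from statistics import mean

# Hypothetical sketch of the SECI rating flow: 14 outcome-based questions,
# each scored on a 1 (very poor) to 5 (excellent) rubric. An evaluation team
# first rates from the school's self-evaluation report, then confirms (and
# possibly revises) the scores during a site visit.

SECI_QUESTIONS = 14
RUBRIC_MIN, RUBRIC_MAX = 1, 5

def validate_ratings(ratings):
    """Check that a rating sheet covers all questions within the rubric range."""
    if len(ratings) != SECI_QUESTIONS:
        raise ValueError(f"expected {SECI_QUESTIONS} ratings, got {len(ratings)}")
    if any(not (RUBRIC_MIN <= r <= RUBRIC_MAX) for r in ratings):
        raise ValueError("each rating must fall between 1 (very poor) and 5 (excellent)")
    return ratings

def consensus_profile(report_ratings, site_visit_revisions):
    """Site-visit scores override report-based scores where the team revised them."""
    validate_ratings(report_ratings)
    final = list(report_ratings)
    for question_index, revised_score in site_visit_revisions.items():
        final[question_index] = revised_score
    return validate_ratings(final)

# Example: ratings taken from a self-evaluation report, two of which the team
# revised after its site visit.
report = [4, 3, 5, 4, 3, 4, 2, 3, 4, 5, 3, 4, 4, 3]
final = consensus_profile(report, {6: 3, 10: 4})
print(round(mean(final), 2))
```

The point of the sketch is the two-stage judgment: report-based scores are provisional, and the site visit is what finalizes them before the team leader submits the profile.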
The SECI system was validated through a pilot study in 2005. The Kyungbook
Province Office agreed to use SECI for their school evaluation as a pilot site. Through
collaboration among the MEHRD, KEDI, and the Kyungbook Province Office, 72
schools joined the pilot school evaluation project (J. Kim, T. Chung, J. Kim, & S. Kang,
2005b). About 30 evaluators were newly recruited and trained by the KEDI research
team. At the end of 2005, the results of school evaluation were opened to the public.
Based upon the results of this new school evaluation project, the remaining 15 LEOs
also adapted the SECI and began to use it in 2006.
An important effect of the criticisms by the NAC and the MEHRD in the
previous stage was that KEDI became cognizant of the importance of involvement
of more stakeholders in the design and implementation processes. The SECI was an
attempt to integrate the perspectives of the previous MOE and those of the LEOs to
arrive at a consensual system. It was also an attempt to have a system that would
satisfy the demands of the NAC and those of the MEHRD. Finally, it was recognized
that, if this system is to be socially valid, it needs to be of value to yet another
community of stakeholders: that of parents and the general interested public.
Construction and implementation: 2006 - present


For the construction and implementation of a new school evaluation model,
the MEHRD and KEDI developed a management system. The MEHRD and LEOs
discussed their distinct functions in order to avoid redundancy and inefficiency. The
main function of the MEHRD was found to be the management of the overall quality
of the school evaluation system, including such systemic matters as evaluator training,
evaluation index development and validation, financial support, and electronic
database development (J. Kim, J. Chung, J. Kim, & S. Kang, 2005a). The LEOs, on the
other hand, identified their main task as the actual implementation of the new school
evaluation system.
After developing the SECI, the MEHRD invested research funds to develop a
school evaluation manual. The necessity of such a detailed evaluation manual had
been suggested by LEOs and evaluators. The need had stemmed from apparent
inconsistent ratings among raters when using the SECI. Although the SECI was
utilized throughout the entire school system in Korea, actual implementation of
the process was done at the local level. Thus, without having sufficient training in
interpreting the meaning of the indexes, local evaluators may have had difficulty
utilizing the indexes. In the manual, detailed descriptions about the meaning of each
index along with evaluation rubrics were provided.
The MEHRD also prepared evaluator training workshops every year. In the
workshops, rating situations were simulated. Through these simulation experiences,
teams of evaluators learned how to reach consensus about the meaning of a rating as
well as about the score level of each school.
To improve the efficiency of data collection, KEDI developed a web-based
database system. Once each school evaluation team reached a consensus on the
quality of school they were evaluating, the team leader entered the evaluation scores
into the database system. Using the collected data from different local education
offices, KEDI could easily and efficiently develop a large evaluation database. KEDI
would publish a national evaluation report every year after analyzing data from the
evaluation database.
Based upon the newly redesigned school evaluation system, 16 LEOs adapted
the SECI for their school evaluation in 2006. School evaluators at each LEO
participated in a school evaluator workshop prepared by the MEHRD and KEDI. In
addition, through a partnership among LEOs, the MEHRD, and KEDI, the results of
school evaluation were shared. Although the school evaluation information was only
partially shared with parents, it was the first time ever that the general Korean public
had access to some detailed information about school quality.
However, many educational scholars, teacher unions, school administrators,
and superintendents are still skeptical about the value of school evaluation. They
question the consequential validity of the evaluation system. Although the contents
and processes of the evaluation system have been improved, they are not sure
whether the evaluation system really enhances the quality of schooling. In addition,
some people are even questioning whether school evaluation itself is appropriate for
the education system in Korea. For these reasons, not only field practitioners,
educational researchers, administrators, and parents, but also the government, do
not attach a high priority to the current school evaluation process. Thus, construction
and implementation of the school evaluation system is still on-going. It is also clear
that, even when all stakeholders are on board, it will take more negotiation of goals,
objectives, and outcomes prior to the achievement of a reasonable level of social
validity.

Critical factors influencing the school evaluation policy


After 15 years, the Korean school evaluation policy is still evolving with more
stakeholders involved. Using the newly developed evaluation index (i.e., SECI),
the school evaluation system has shifted to become a more contextual and holistic
assessment. Unfortunately, the quality of the school evaluation system is still not
satisfactory to many stakeholders, although the MEHRD and LEOs have spent a
lot of research funds and energy on it. Perhaps it is important to ask: “what are
the critical factors influencing the acceptance of the school evaluation policy?” To
answer this question, we offer the following five aspects of the evaluation system for
consideration: preparation and supply of qualified evaluators, utilization of student
achievement data, standards for judging good schools, teacher management system,
and utilization of evaluation results.

Preparation and supply of qualified evaluators


Evaluators are at the heart of the current SECI system of evaluation. The
main intention was to assess the quality of schooling after considering many
contextualized factors and conditions. In this approach, well-trained evaluators
are supposed to assess the quality through observation, interview, interaction with
students, document analysis, analysis of survey data, and so forth. Thus, it is critical
that there is an adequate supply of well-trained and qualified evaluators who can
conduct the school evaluations appropriately.
Unfortunately, neither the MEHRD nor LEOs had a sufficient number of
qualified evaluators. One of the complaints from field teachers in the process
of the school evaluation implementation was about the quality of evaluators
(T. Chung, K. Nam, et al., 2008; J. Kim, T. Chung, et al., 2005a; J. Kim, I. Min, et
al., 2009, December). Many school teachers were concerned about the quality of
evaluators because they believed that final results of school evaluation could vary
in accordance with the quality of evaluators. In a sense, educational administrators
and policy makers did not fully understand that school evaluation is not merely a
scoring process with an evaluation index. Rather, they needed to recognize that
numbers (e.g., evaluation scores) do not make decisions; people do.
Different opinions arose from the evaluators. Although the newly developed
evaluation model relied heavily on evaluators' judgments, the evaluators were not
given enough time to fully conduct school evaluations. Because of the burden put on
field teachers in preparing for school evaluation, the evaluators did not want to take
up too much of teachers' time, so they usually spent only half a day to one day on
site visits. These short visits were welcomed by teachers, but the evaluators did
not have enough time to interview school teachers and students thoroughly. Thus,
the evaluators could not have sufficient information to assess the overall quality
of schooling. As a result, evaluators tended to rely on information from the paper
reports prepared by teachers. In other words, while school teachers did not want the
evaluators to stay at their schools for a long time, they also complained about the
evaluators’ judgment as being superficial due to their brief visits.
To bridge this gap, a more systematic training program for evaluators was
initiated. School evaluators were provided with two-day intensive workshops
prepared by the MEHRD and KEDI. However, since the government and LEOs
did not have enough funds for the training workshops, only selected evaluators
from each LEO joined the workshops. After the workshops, the trained evaluators
were supposed to deliver the knowledge and experiences from the workshops to
their local colleagues. This indirect train-the-trainer linkage between the training
workshops and evaluators at LEOs was not enough to help school evaluators be
ready to perform valid assessments. Thus, improving the quality of evaluators
would be one of the foremost priorities for the current school evaluation system.

Utilization of student achievement data


Throughout the school evaluation process, student achievement data were never
utilized. The main reason comes from the school equalization policy which assumes
that all schools in Korea basically have similar quality. Although policy makers did not
assume that characteristics of all schools are the same across the nation, it was believed
that all students and parents could access an almost homogeneous quality of schooling.
However, in reality, there are large differences in student achievement by geographic
areas (J. Kim, I. Min, et al., 2009, December; S. Kim et al., 2009; Ryu et al., 2006).
Under the school equalization policy, the parents’ right of school selection has
been limited. Based upon the homogeneous school quality assumption, parents were
not supposed to select public schools. Rather, school selection was done by lottery.
In spite of the existence of a large gap in student achievements among schools, the
policy has been a keystone in Korea’s educational system for more than 30 years
(Kang, Kang, Kim, & Ryu, 2007). Because of this policy, schools were not compared
in terms of student achievement.
Hence, no norm-referenced information about school quality was ever collected
or interpreted. Although superintendents and administrators at LEOs knew the level
of student achievement, they were hesitant to open the information to the public.
Actually, one practical index showing the overall quality of each school at the high
school level has been the proportion of students entering highly ranked colleges. In a
sense, school quality has remained an unknown in Korean society.
There is a fundamental conceptual and philosophical dilemma between school
evaluation and the equalization policy. The school evaluation system assumes
that each school can create their own plans and manage their own schooling
processes, and eventually reach a better position in terms of curriculum design
and implementation, instructional system change, and even student achievement.
However, under the equalization policy umbrella, that kind of autonomous approach
in each school was not permitted. Such a basic contradiction is why some Korean
researchers have questioned the feasibility of any school evaluation policy within the
Korean education system.

Standards for determining good schools


One of the assumptions behind school evaluation is that school quality can
be judged against a given definition of a 'good school.' In other words, when it
comes to the evaluation of school quality, the evaluation criteria derive from a
theoretical conception of a complex construct (i.e., the good school). Based upon the good
school concept, school evaluators used judgment criteria to determine the quality
of a school. Some criteria data were interpreted relatively by comparing the data of
a school against those of other schools; while other criteria data were interpreted
absolutely against some predetermined standard cutoff score.
Through the school evaluation implementation during 2000-2003, there was
a dilemma as to how evaluators interpreted school quality in terms of relative or
absolute standards. For instance, when the evaluators used relative interpretations,
large-size schools which have many students and teachers were likely to get higher
evaluation scores compared to small ones. Under these circumstances, schools in
small cities were hesitant to actively join school evaluation. For a similar reason,
schools located in low-income areas also got relatively lower evaluation scores.
However, when the MEHRD and KEDI developed the SECI with a framework
of absolute interpretation, things changed. For instance, even small schools at low-
income areas could be judged as good schools when their school management system
showed strengths. Schools that showed continuous growth in student satisfaction,
quality of curriculum, and instructional system were also rated as good schools
although their average scores in province-level testing were lower when compared
against those of other schools. In other words, as long as a school showed good
evidence given the quality standards in the SECI, the school would get high ratings
without comparing that school’s ratings against those of others.
One concern in this absolute interpretation was that administrators who were
very familiar with relative interpretations were skeptical about the value of such
absolute interpretations. They expected to see a simple and comparable interpretation
for their various decisions (e.g., research funds distribution, selection of innovative
schools, sending consultants to poor quality schools).
As an alternative solution, the education office of Kyungbook Province used
a mixed model covering absolute and relative interpretations (J. Kim, T. Chung, et
al., 2005b). After collecting the SECI evaluation scores from school evaluators, the
office combined the scores with locally measurable quantitative data that can be
compared across schools (e.g., parent satisfaction score, rate of college entrance, rate
of gaining certificates). Additionally, they also looked at the growth rate of these
quantitative indicators in the last three years. Finally, the office reported a total score
after combining the SECI scores along with the collected quantitative data. When the
office used this model, many schools in low-income areas were reinterpreted as good
schools because the quality of those schools had been rapidly enhanced. In contrast,
some schools that had reported high ratios of 4-year college entrance were re-categorized
as being at an 'intermediate level' of quality because the quality of these schools had not been
changed. Thus, a mixed model combining SECI scores with measurable outcomes
along with growth rate is becoming a reasonable approach.
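The mixed model above (absolute SECI ratings combined with school-comparable quantitative indicators and their three-year growth) can be sketched as follows. The 50/30/20 weights, the normalization, and the indicator names are assumptions made for illustration; the source does not give the Kyungbook office's actual formula.

```python
# Illustrative sketch of the mixed scoring model: SECI ratings (absolute
# interpretation) combined with cross-school quantitative indicators and
# their three-year growth. Weights and indicator names are hypothetical.

def normalize(value, low, high):
    """Map a raw indicator onto a 0-1 scale for cross-school comparison."""
    return (value - low) / (high - low)

def growth_rate(series):
    """Average year-over-year growth across a three-year indicator history."""
    changes = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(changes) / len(changes)

def total_score(seci_mean, indicators, histories,
                w_seci=0.5, w_ind=0.3, w_growth=0.2):
    """Combine the SECI mean (1-5), normalized indicators, and growth."""
    seci_part = normalize(seci_mean, 1, 5)
    ind_part = sum(indicators) / len(indicators)
    growth_part = sum(growth_rate(h) for h in histories) / len(histories)
    return w_seci * seci_part + w_ind * ind_part + w_growth * growth_part

# A small school in a low-income area: modest indicator levels, strong growth.
score = total_score(
    seci_mean=4.0,
    indicators=[normalize(72, 0, 100),   # parent satisfaction (%)
                normalize(45, 0, 100)],  # college entrance rate (%)
    histories=[[60, 66, 72], [30, 38, 45]],  # three-year histories
)
print(round(score, 3))
```

Because the growth term rewards improvement independently of absolute level, a school like this one can outscore a static school with higher raw indicators, which is exactly the reinterpretation effect the office observed.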

Teacher management system

A very difficult obstacle for the school evaluation system is the existing teacher
management system in Korea. In particular, in the case of public schools, teachers are
rotated among schools every 4 or 5 years. Thus, it is very difficult to communicate
with teachers about the improvement of the quality of their current school with
any sense of continuity or growth. For instance, it was common to hear “well,
these school performances were done with the previous school principal” when
interviewing principals.
The teacher rotation system was designed in order to avoid unequal distribution
of high-quality teachers (E. Kim, 2009). It was also used to solve any imbalance in
teacher supply and demand, especially in rural areas. In the case of public schools, a
principal's average tenure at a single school is about two years. Thus, under the 3-year
evaluation cycle, maintaining a consistent school quality becomes difficult.
When it comes to private schools, the situation is somewhat better, but still
problematic. Since personnel are not rotated, it is relatively easier to identify the
people who are actually responsible for the current quality of schooling at a particular
school. However, even in this case, private schools in Korea are financially supported
by the central government. So, except for the rotation of personnel, other aspects of
school management are still controlled by the central government. For this reason,
principals at private schools are also not permitted to manage teachers independently.
Utilization of evaluation results and accountability system


The final factor is the utilization of evaluation results for improvements and
for accountability. Opening the results of school evaluation to the public was not
successfully carried out from the beginning. Before 2006, results were shared
with school principals and teachers only, and not with the general
public. After the SECI was developed and implemented in 2006, not only a national
evaluation report but also evaluation reports at LEOs were opened to the public. In
addition, the evaluation results were sent to parents and local communities.
When the evaluation results were opened to the public, teachers and principals
expressed great concern about potential misinterpretations of the results (J. Kim, T.
Chung, et al., 2005b). For instance, many school principals thought that the opening
would not be fair to them because the public might interpret any deficiency as
attributable to their professional failure, when they had little power to control factors
that could influence these results. Unless the principals have an appropriate level of
rights and power in teacher recruitment, financial planning, curriculum change, and
so forth, they believe that the results of school evaluation should not be attributed to
them. In particular, in the case of poor quality schools located in low-income areas,
it has been recommended to provide financial support, including extensive school
consulting, rather than asking school principals to be responsible for the quality.
However, the Ministry of Education, Science, and Technology (MEST)2) and
parent associations keep insisting on the sharing of the evaluation results, including
student achievement data, with the public. They believe that the construction of an
accountability system in education is impossible without understanding school quality
with objective and communicable indexes. Currently, the MEST is ready to include
student achievement data as part of the school evaluation analysis. Although it would
be very difficult to report raw scores of student achievement, the MEST is trying to
share the data at a minimum level. For instance, instead of reporting achievement
test scores of each school, percentages of each school’s achievement level (e.g., high,
middle, low level) would be opened. If the achievement data can be opened and
utilized, the information also could be utilized by school evaluators. If the achievement
data could be combined with the results of school evaluation, it would be a great step
forward toward the construction of an accountability system in Korea.
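The minimal-disclosure reporting described above amounts to binning each school's raw scores and publishing only the bin percentages. A small sketch, assuming hypothetical cutoff scores of 70 and 85 (the MEST's actual level definitions are not given in the source):

```python
# Sketch of level-based disclosure: instead of raw achievement scores, a
# school reports the percentage of its students at each achievement level.
# The cutoff scores (70 and 85) are hypothetical.

def achievement_level_percentages(scores, low_cut=70, high_cut=85):
    """Summarize raw scores as percentages of low/middle/high achievers."""
    counts = {"low": 0, "middle": 0, "high": 0}
    for s in scores:
        if s < low_cut:
            counts["low"] += 1
        elif s < high_cut:
            counts["middle"] += 1
        else:
            counts["high"] += 1
    n = len(scores)
    return {level: round(100 * c / n, 1) for level, c in counts.items()}

# Example: ten students' raw scores published only as level percentages.
report = achievement_level_percentages([55, 62, 71, 74, 78, 80, 83, 88, 91, 95])
print(report)
```

The transformation is deliberately lossy: individual scores cannot be recovered from the percentages, which is what makes this a "minimum level" of data sharing.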
For a long time, accountability was not a major controversial concern in the
area of education in Korea. Korean people believed that education could be exempted
from accountability-related actions and activities because education should not
be treated like a business entity. It is a sacred institution that ensures the
transmission of proper values and behaviors of central importance to the Korean
society to the next generation. As such, concepts such as cost-benefit analyses were
considered fundamentally unacceptable within the realm of education. In addition,
because of a long historical background in Confucianism, the Korean society has held
education as a sacred profession. Thus, nobody would imagine asking teachers, with
their honored social status, to show evidence of the quality of their teaching
(Hwang et al., 1997).
However, in spite of these long-held traditional views, the perspectives of the
Korean people have gradually been changing in recent years. Taxpayers have begun
to ask the government to examine the accountability of the education system. They
believe that education is no longer a sacred area and should be treated like any other
social institution. So, teachers are also expected to provide reasonable evidence
showing that taxpayers' money is being spent appropriately and meaningfully. In
addition, the school system itself should be held accountable through active
communication with the general public outside of education. To meet these demands,
valid and reliable school evaluation, along with an easy-to-understand reporting
system, is expected.

Observations and discussions


The purpose of this study was to review the school evaluation policy and system
in Korea in terms of their social validity. Specifically, the researchers tried to investigate
the historical contexts of the school evaluation policy development, the school
evaluation system design, and critical factors contributing to the implementation of
the policy. The results showed that the school evaluation policy has been evolving
continuously. Although the central government and LEOs have spent much time and
energy, implementation has been slow. The slow process might be attributed to the
unfamiliarity with the concept of school evaluation within Korean society.
Historically, Korean people have not been familiar with the concept of evaluating
school quality. Within the sociocultural context of Korean education, school quality
has not been treated as a construct which can be systemically assessed. Thus, since
the presidential committee suggested the school evaluation policy, it has not been
easy to reach a consensus about the meaning of school evaluation. Actually, the
concept of school evaluation was alien and incongruent to Korean people. The
poor consensus among stakeholders (e.g., MOE, MEHRD, NAC, field practitioners,
parents) became a critical factor affecting the entire process of school evaluation
policy development and implementation.
Along with the historical background of school evaluation policy, one of the
factors that should be seriously considered before implementing school evaluation
policy was that of school autonomy. That is, for a valid implementation of school
evaluation policy, each school needs to have its own vision toward the improvement
of school quality. However, because of the long historical roots of a centralized
education system, this primary condition has not been provided, or at least not
perceived as being provided. In addition, there was no thorough consideration about
some other pre-existing policies (e.g., equalization policy, teacher rotation) that
could have conflicts with the school evaluation policy. As a result, the main focus of
school evaluation policy has remained a sort of monitoring system checking the
continuous improvement of school quality rather than a systemic approach toward a
sound accountability system.
In terms of evaluation approaches and methods, there was also a lack of unity
of purpose between the central government and LEOs. During 1996-1999, the central
government concentrated on constructing a nation-wide system to provide formative
evaluation feedback to schools. In contrast, the LEOs designed and implemented
their own summative evaluation measures. The formative-summative gap between
the interests of the MOE and those of LEOs was partially due to the use of two
different evaluation methods: qualitative vs. quantitative. The MOE site-visitation
system was designed specifically to obtain formative evaluation information for the
schools and LEOs. The MOE’s main interest was a continuous dialogue between
each school and the evaluation team with a goal of continuous quality improvement
through the information obtained via qualitative methods, such as in-depth
interviews and observations.
The LEOs on the other hand wanted to develop relatively quick quantitative
measures. These measures tend to be more limited in scope and depth, often
superficial, and are useful mostly as part of a summative evaluation system. This lack
of unity of purpose between the MOE and LEOs led to many criticisms from various
stakeholders, and the national effort was terminated in 2003. Such criticisms were
healthy and were in fact part of an informal on-going meta-evaluation of the policy.
During this initial trial-and-error period, a number of unintended consequences
emerged, including a lack of interaction between the MOE and LEOs, complaints
of poor quality of evaluation, and complaints of undue work burden on field
practitioners. The objective of accountability and the use of student achievement data
were largely ignored.
During the subsequent redesign period (2004-2005), national interests, LEOs’
and field practitioners’ requests, and parents’ needs were all taken into account.
A set of nationally validated evaluation indexes, a system of technical support,
and evaluator training workshops were developed. The newly designed system
was implemented in 2006 in 16 LEOs. For accountability, the provinces opened the
results of school evaluation to the public for the first time in 2005 and the reports
were shared with parents. Thus, several problems and issues raised in the previous
approach were partially resolved. The development in the last 15 years can be
understood as a slow, arduous but steady process of social validation toward a
consensual school evaluation policy.
The overall school evaluation policy is still facing several problems, including
the lack of utilization of achievement data, the need to further improve the quality
of evaluators’ judgments, and the need to continuously validate the SECI. More
fundamentally, the question remains as to whether a school evaluation system is
compatible with the Korean culture, with its long history of equalization policy based
on Confucian precepts regarding education.
Despite such problems, a social consensus is emerging that the evaluation of school
quality is an important component of school accountability, an aspect of education
increasingly demanded by taxpayers. Without a
continuous dialogue among policy makers, field practitioners, and parents, it would
be very difficult to construct a sound system to meet the increasing demands from
more diverse stakeholders. Based upon this consensus, the central government is
considering a web-based system reporting the results of school evaluation along with
student achievement data.
In terms of the social validation of school evaluation policy, an understanding
about the quality of schooling is evolving through continuous dialogue among
diverse stakeholders. In particular, not only traditional direct stakeholders (e.g., the
central government, education offices, administrators, field practitioners), but also
parents, have joined the discussion about the future of school quality. In addition,
based upon the lessons learned from the implementation and results of school
evaluation, schools are asked to be more responsible for their educational services.
In this direction, the school management system is slowly moving from the
central government-oriented approach to a decentralized school-based approach.
Therefore, social validation of school evaluation policy in Korea is an on-going
process.

1) As of January 2001, the Ministry of Education (MOE) was changed to the Ministry of Education and Human
Resources Development (MEHRD).
2) As of 2008, the Ministry of Education and Human Resources Development was changed to the Ministry of
Education, Science, and Technology.

Address for correspondence


Juhu Kim
Associate Professor of Education
Graduate School of Education, Ajou University
Kyungki-do Youngtong-Ku Woncheon-dong
Suwon, 443-749, Korea
Tel: 82 31 219 1793
Email: juhu@ajou.ac.kr


References
Chung, T., Kim, J., & Kim, J. (2004a). A study on the development of comprehensive school
evaluation model. Seoul: Korean Educational Development Institute.
Chung, T., Kim, J., & Kim, J. (2004b). Comprehensive indexes for school evaluation. Seoul:
Korean Educational Development Institute.
Chung, T., Kim, J., & Kim, J. (2005). A report on the results of tailored school evaluation
for primary and secondary schools at ChungChungnamdo province. Seoul: Korean
Educational Development Institute.
Chung, T., Kim, J., Lee, T., & Kim, J. (2005). Making a linkage between school evaluation
and supervision. Seoul: Korean Educational Development Institute.
Chung, T., Nam, K., & Kim, J. (2008). Annual report of comprehensive school evaluation.
Seoul: Korean Educational Development Institute.
Dellinger, A. B., & Leech, N. L. (2007). Toward a unified validation framework in
mixed methods research. Journal of Mixed Methods Research, 1(4), 309-332.
Erpenbach, W. (2003). Statewide educational accountability under NCLB. Washington,
DC: The Council of Chief State School Officers.
Fast, E. F., & Erpenbach, W. J. (2004). Revisiting statewide educational accountability
under NCLB. Washington, DC: The Council of Chief State School Officers.
Fast, E. F., & Hebbler, S. (2004). A framework for examining validity in state
accountability systems. Washington, DC: The Council of Chief State School
Officers.
Fawcett, S. B. (1991). Social validity: A note on methodology. Journal of Applied
Behavior Analysis, 24, 235-239.
Foster, S. L., & Mash, E. J. (1999). Assessing social validity in clinical treatment
research: Issues and procedures. Journal of Consulting and Clinical Psychology, 67,
308-319.
Gale, T. (2001). Critical policy sociology: Historiography, archaeology and genealogy
as methods of policy analysis. Journal of Education Policy, 16(5), 379-393.
Gong, B., Blank, R. K., & Manise, J. G. (2002). Designing school accountability systems.
Washington, DC: The Council of Chief State School Officers.
Hwang, J., Yoon, J., Kang, Y., Oh, S., Kim, K., & Kim, C. (1997). Development of school
evaluation procedure and criteria for primary and secondary schools. Seoul: Seoul
National University Education Research Center.
Jeong, S., Yoo, K., Kim, M., Kim, J., & Kim, B. (2003). Strategic plans for development of
school evaluation. Seoul: Korean Educational Development Institute.
Kang, Y., Kang, S., Kim, S., & Ryu, H. (2007). High school equalization policy: The reality
and myth. Seoul: Korean Educational Development Institute.
Kim, B. (2004). An analytic review of national-level school evaluation. Korean Journal
of Korean Education, 31(2), 219-244.
Kim, E. (2009). Teacher policy: procurement and disposition of qualitative teachers. Seoul:
Korean Educational Development Institute.


Kim, H., Park, J., Yang, S., Kim, E., Jang, S., Lee, T., et al. (2005). A study on the
innovative educational administration system for the construction of school-based
management. Seoul: Korean Educational Development Institute.
Kim, J. (2004). A qualitative school evaluation model for the improvement of
schooling through mutual information exchange. Korean Journal of Educational
Research, 39(1), 217-248.
Kim, J. (2005). Is educational innovation possible through school evaluation?
Educational Development, 32(2), 92-98.
Kim, J., Choi, K., Ryu, B., & Lee, J. (1999). Development of school evaluation model for
vocational high schools. Seoul: Korean Educational Development Institute.
Kim, J., Chung, T., Kim, J., & Kim, S. (2004). Investigation of school evaluation
systems at municipal and provincial offices of education. Korean Journal of
Education Evaluation, 17(2), 237-257.
Kim, J., Chung, T., Kim, J., & Kang, S. (2005a). Annual report of comprehensive school
evaluation. Seoul: Korean Educational Development Institute.
Kim, J., Chung, T., Kim, J., & Kang, S. (2005b). School evaluation report of vocational high
schools at Kyoungbuk province. Seoul: Korean Educational Development Institute.
Kim, J., Min, I., & Choi, P. (2009, December). Analysis of K-SAT achievement gap
by geographical regions in college scholastic aptitude test. Paper presented at the
symposium on the analysis of college scholastic aptitude test and achievement
test results at national level, Seoul, Korea.
Kim, S., Chung, T., & Kim, J. (2009). Analysis of the current status and outcomes of school
evaluation. Seoul: Korean Educational Development Institute.
Kim, Y., Kim, H., Yoon, J., Kim, J., & Huh, S. (2000). Field investigation of the current
issues on education. Seoul: Korean Educational Development Institute.
Kramer, J. M. (2011). Using mixed methods to establish the social validity of a self-
report assessment: An illustration using the Child Occupational Self-assessment
(COSA). Journal of Mixed Methods Research, 5(1), 52-76.
Lee, I., Kim, Y., & Lee, H. (1999). Developing a national evaluation framework for the
primary and secondary schools. Seoul: Korean Educational Development Institute.
Marion, S., & White, C. (2002). Making valid and reliable decisions in determining adequate
yearly progress. Washington, DC: The Council of Chief State School Officers.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences
from persons’ responses and performances as scientific inquiry into score
meaning. American Psychologist, 50, 741-749.
Moss, P. A., Girard, B. J., & Haniford, L. C. (2006). Validity in educational assessment.
Review of Research in Education, 30, 109-162.
OFSTED. (2003a). Framework for inspecting schools. London: Office for Standards in
Education. HMI 1525.
OFSTED. (2003b). Handbook for inspecting nursery and primary schools. London: Office
for Standards in Education. HMI 1359.

OFSTED. (2003c). Handbook for inspecting secondary schools. London: Office for
Standards in Education. HMI 1359.
OFSTED. (2004). The future of inspection: A consultation paper. London: Office for
Standards in Education. HMI 2057.
Onwuegbuzie, A. J., & Johnson, R. B. (2006). The validity issue in mixed research.
Research in the Schools, 13, 48-63.
Park, G. Y. (2009). Critical analysis of MB government’s higher education policy.
Trends and Outlook, 77, 50-75.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand
Oaks, CA: Sage Publications.
Potts, A. (2002). Key state education policies on PK-12 education. Washington, DC: The
Council of Chief State School Officers.
Ryu, B., & Kim, J. (1999). Development of a school evaluation model for general high school.
Seoul: Korean Educational Development Institute.
Ryu, B., Lee, K., Choi, E., Han, S., Park, J., Son, W., et al. (2006). Analysis of achievement
gap by geographical regions and development of achievement gap index. Seoul:
Ministry of Education, Science, and Technology.
Sammons, P., Hillman, J., & Mortimore, P. (1995). Key characteristics of effective schools:
A review of school effectiveness research. London: Office for Standards in Education.
Stufflebeam, D. L., Foley, W. J., Gephart, W. J., Hammond, L. R., Merriman, H. O.,
& Provus, M. M. (1971). Educational evaluation and decision-making in education.
Itasca, IL: Peacock.
Stufflebeam, D. L. (2000). Evaluation models. New Directions for Evaluation, No. 89. San
Francisco, CA: Jossey-Bass.
Wolf, M. M. (1978). Social validity: The case for subjective measurement, or, how
applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis,
11, 203-214.
Yoo, K., Kim, D., Kim, M., Kim, J., Shin, S., Lee, Y., et al. (2001). Annual report
of comprehensive school evaluation. Seoul: Korean Educational Development
Institute.
