Assessment in Education: Principles, Policy & Practice, 2016
Vol. 23, No. 1, 1–7, http://dx.doi.org/10.1080/0969594X.2016.1119519

EDITORIAL
Educational Assessment in Latin America

Assessment in Education is an international journal publishing articles relating to
educational assessment in all parts of the world. As in many English-language
academic journals, articles from the western world dominate, and geographical
regions are consequently unevenly represented. This special issue focusing on Latin
America goes some way towards addressing this imbalance by encouraging contributions
from the region. It follows other special issues featuring specific geographical areas,
including China (Volume 21, Issue 3), sub-Saharan Africa (Volume 20, Issue 4) and
Asia Pacific (Volume 16, Issue 3 and Volume 13, Issue 2), in line with the journal’s
aims. The editorial in the inaugural issue of Assessment in Education published in
1994 drew attention to the need for assessment developments ‘to be explored in a
range of different educational contexts and national systems’ (pp. 7–8).
A number of articles relating to educational assessment in Latin America have
been, and will be, published in other issues, and these complement those gathered together here.
Indeed, space limitations and other considerations mean that some current articles
concerning the region are being published in regular issues (e.g. Helman,
Delbridge, Parker, Arnal, & Mödinger, 2015). The journal has also carried a review
of international science assessment in developing countries that included reference
to Chile, Colombia and Mexico (Johnson, 1999), a paper tracing the expansion of
testing in Latin America (Schiefelbein & Schiefelbein, 2003), and profiles of the
educational assessment systems in Argentina (Gvirtz & Larripa, 2004) and Chile
(Meckes & Carrasco, 2010).
This special issue includes three articles focused on Chile, two on Brazil, and
one each on Peru and Mexico. The predominance of Chile perhaps reflects its early
highly centralised education system, and the fact that it adopted national testing
before other Latin American countries (Meckes & Carrasco, 2010). Chile was one
of only three developing countries (the other two being India and Thailand) that
fully participated in the first science survey by the International Association for the
Evaluation of Educational Achievement in the early 1970s (Johnson, 1999). Chile
also participated in all the language surveys. Referring to Husén and Postlethwaite
(1996), Sandra Johnson points out the logistical and financial difficulties for devel-
oping countries participating in international surveys of this kind, which may
account for limited and later take-up by others. Following Chile, Mexico and
Colombia first engaged with international tests through the Third International
Mathematics and Science Study (TIMSS) (Johnson, 1999). Schiefelbein and Schiefelbein’s
(2003) analysis of the expansion of testing in Latin America points out the impor-
tant role educational testing has played in the development of education in the
region. Five of the seven articles take what could be broadly considered as testing
and effectiveness perspectives. The exceptions are Gysling’s (2015) article tracing
the historical development of educational assessments in Chile, and Jensen and
colleagues’ (2015) paper that considers opportunities to learn in Mexican
classrooms. These two papers respectively open and close the issue.
The opening article, ‘The historical development of educational assessment in
Chile: 1810–2014’ (Gysling, 2015), provides a fascinating analysis of assessment in
a country that has experienced extremes of centralisation and decentralisation, as
well as providing a contextual backdrop for the two other Chilean papers. It consid-
ers both teachers’ ongoing assessments and national assessments conducted at the
end of each year, as regulated by a succession of 24 state documents published from
1843 onwards. Jacqueline Gysling used a framework of five topics (unit assessed,
purpose, content, mode and assessor) and identified six distinctive periods, each match-
ing a different definition of assessment. For over two hundred years the state has
used assessment as a policy tool, although the mode and purpose of assessment
have changed. While the particularities of Chile’s political history that have shaped
its education system are unique, the move away from assessments focused on indi-
vidual students’ outcomes and their futures, to assessments to evaluate schools in a
market-driven logic of improvement is not. Gysling’s analysis leads her to conclude
that educational decentralisation in Chile began in the 1960s, some 20 years earlier
than the generally accepted pattern across Latin America. It was also in the 1960s
that the first national tests were introduced, eventually leading to the establishment
in 1988 of the ‘Sistema de Medición de la Calidad de la Educación’ or SIMCE.
This is the Education Quality Measurement System, using multiple-choice tests
taken by all pupils in various grades throughout the country, still in operation
today. This well-established, comprehensive system annually tests hundreds of thou-
sands of children of varied ages on their knowledge of language, mathematics, and
social and natural sciences. Large, rich data-sets are generated that researchers anal-
yse for different purposes: the next two papers in this issue use SIMCE test results.
Bernadita Muñoz-Chereau and Sally Thomas’ paper ‘Educational effectiveness
in Chilean secondary education’ compares different ‘value added’ (VA) approaches
to evaluating schools (Muñoz-Chereau & Thomas, 2015). In the first part of the
paper, the authors review four approaches to measuring school performance and
effectiveness (raw attainment data, contextualised attainment (CA), VA and contex-
tualised value added (CVA)), referring particularly to studies carried out in Latin
America and the Caribbean. As well as providing the theoretical basis for the speci-
fic analysis that follows, this gives a valuable overview of student outcomes and of
the state of associated research in the region. The specific study reported in the
paper used a sample of 177,461 secondary school students, matching their 10th
grade language and maths test results with their results two years previously and
with family characteristics. Four models (Raw, CA, VA and CVA) were applied
separately with the latter three adjusted for a number of explanatory variables at
different levels, allowing for the comparison of results achieved with the various
models. The study confirmed previous similar research demonstrating that CVA
modelling is the best of the four approaches for estimating school effects and there-
fore the most appropriate method for holding schools accountable for their students’
results. The major finding, though, is the substantial and statistically significant effect
not only of the school level, but also of the class and municipality levels. Muñoz-Chereau and
Thomas point out that bringing in these extra levels to the CVA analysis provides a
more sophisticated and rigorous model, without which substantial numbers of
Chilean schools (25% when considering language and 45% for maths) may be mis-
classified in terms of performance. This is likely to have very serious consequences
for some schools (and consequently pupils, parents and teachers) in Chile, where
the current high-stakes accountability system uses SIMCE results to classify schools
as ‘good’, ‘satisfactory’, ‘fair’ or ‘poor’, but without giving due consideration to
contextual and compositional factors. The issue relates not only to Chile and other
Latin American countries (Cervini, 2009; Thomas, Salim, Muñoz-Chereau, & Peng,
2012) but to countries all around the world where major decisions are made on
analyses of student outcomes that, as this paper demonstrates, are open to question.
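To make the comparison of approaches concrete, a contextualised value-added model of this general kind can be sketched as a multilevel regression. The notation below is ours and purely illustrative, not the authors’ exact specification, with student i nested in class j, school k and municipality l.

```latex
% Schematic four-level CVA model (illustrative notation only):
% student i, class j, school k, municipality l.
\begin{equation*}
  y_{ijkl} = \beta_0
           + \beta_1 \, \mathrm{prior}_{ijkl}              % grade 8 attainment
           + \boldsymbol{\beta}_2^{\top} \mathbf{x}_{ijkl} % student and family context
           + m_l + v_{kl} + u_{jkl} + e_{ijkl}             % random effects at each level
\end{equation*}
```

In this notation the VA model drops the contextual terms, CA drops the prior attainment term, and the raw model drops both; the school effects of interest are the estimated school-level residuals.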
The third paper from Chile is by Sandy Taut and colleagues, ‘Teacher performance
and student learning: Linking evidence from two national assessment programmes’
(Taut et al., 2015). Like the previous paper, this research employs longitudinal
SIMCE data from the national assessment system, but in this case uses it to explore
the validity of the Chilean teacher evaluation programme. Sophisticated techniques
(multilevel modelling) are employed to investigate the relationship between teachers’
evaluation results and their students’ VA progress between grades 8 and 10. The
measures of teaching quality are based on detailed evidence, including a portfolio of
the teacher’s classroom work, a video-taped lesson, a peer interview, supervisor
assessments, and a self-assessment. The findings generally indicate a positive rela-
tionship between value-added estimates of teacher effects and teaching quality mea-
sures, although some correlations were weak. This evidence is useful to inform
evidence-based policy as it demonstrates, at least tentatively, the value of teaching
practice measures and how these relate to student progress. Opportunities to conduct
this kind of empirical study to explore the impact on student outcomes of teachers’
professional development and competencies are rare, due to the lack of relevant infor-
mation collected in large-scale data-sets (Timperley, Wilson, Barrar, & Fung, 2007).
So, the findings are important not just for Chile and the wider Latin American region but also
worldwide, in terms of modelling and evidencing the relationship between teacher
development factors and student outcomes.
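For readers unfamiliar with this kind of linkage analysis, the sketch below illustrates the general idea using synthetic data and the statsmodels library. The two-level specification and all variable names are our assumptions for illustration, not the design of Taut et al.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic data: a latent teacher quality drives both the teacher's
# evaluation result and their students' progress (assumed here purely
# for illustration).
n_teachers, n_students = 100, 30
rows = []
for t in range(n_teachers):
    quality = rng.normal()
    eval_score = quality + rng.normal(scale=0.8)  # teacher evaluation result
    grade8 = rng.normal(size=n_students)          # prior attainment
    grade10 = 0.7 * grade8 + 0.3 * quality + rng.normal(scale=1.0, size=n_students)
    for g8, g10 in zip(grade8, grade10):
        rows.append({"teacher": t, "eval_score": eval_score,
                     "grade8": g8, "grade10": g10})
df = pd.DataFrame(rows)

# Two-level model (students nested within teachers): does the evaluation
# score predict grade 10 attainment once grade 8 attainment is controlled?
model = smf.mixedlm("grade10 ~ grade8 + eval_score", df, groups=df["teacher"])
print(model.fit().summary())
```

A positive, statistically significant coefficient on the evaluation score would correspond to the kind of relationship the paper reports.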
The next paper shifts the research context to a neighbouring Latin American coun-
try, Brazil. Soares, Gonzaga Alves, and Xavier’s (2015) paper ‘Effects of Brazilian
schools on student learning’ investigates the effect of Brazilian elementary schools on
the probability of their students achieving each of three categories of mathematics proficiency
(Insufficient, Basic and Proficient). A hierarchical multinomial model is used to
examine the influence of various school-level factors, including infrastructure (e.g.
maintenance of school buildings, computing and library resources) and homework
policy, on students’ mathematics proficiency. The key findings suggest school infras-
tructure is associated with students being assessed at the Basic proficiency level, and
homework policy is associated with students being assessed as Proficient in mathe-
matics. Importantly, the authors acknowledge there are methodological limitations to
the study, for instance the student assessment data is cross-sectional and not longitu-
dinal; therefore, progress in achieving proficiency cannot be evaluated. Also, more
sophistication was possible in the statistical techniques employed. Nevertheless, the
paper usefully contributes new, if exploratory, findings in the Brazilian context where
such studies are few and far between, and points the way to further research on
educational effects using large-scale nationally representative data-sets.
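In outline, and again in our own notation rather than the authors’ exact specification, a hierarchical multinomial model of this kind contrasts each higher proficiency category against Insufficient:

```latex
% Baseline-category logits for student i in school j, with Insufficient
% as the reference category (illustrative notation only).
\begin{equation*}
  \log \frac{\Pr(Y_{ij} = c)}{\Pr(Y_{ij} = \text{Insufficient})}
  = \beta_{0c}
  + \boldsymbol{\beta}_c^{\top} \mathbf{x}_{ij}  % student covariates
  + \boldsymbol{\gamma}_c^{\top} \mathbf{w}_{j}  % school factors
  + u_{jc},
  \qquad c \in \{\text{Basic}, \text{Proficient}\}
\end{equation*}
```

School factors such as infrastructure and homework policy enter through the school-level covariates, with a separate coefficient vector for each contrast, while the school random effects capture remaining between-school variation.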
Another perspective on the same country is provided by Christine Paget in her
paper ‘Brazilian national assessment data and educational policy: an empirical illus-
tration’ (Paget, 2015). It continues the focus on national testing and effectiveness
analyses, in this case linking resource allocation to student performance in one
region of Brazil. The ‘Prova Brasil’ assessment system was established in 2005 to
monitor educational quality through biennial testing of a sample of middle school
pupils in Portuguese and mathematics. Individual students are not identified, so stu-
dent-level data are cross-sectional, whereas school-level performance can also be
tracked over time. Alongside the test administration, socio-economic data are gath-
ered through four background surveys of students, teachers, principals and schools.
The study focused on the disadvantaged state of Paraíba as the context for explor-
ing one of the purposes of Prova Brasil – that of informing resource allocation
decisions. Specifically, multilevel modelling was used to investigate the extent to
which school resources may predict student attainment on the tests; the analysis found that
infrastructure and academic resources significantly predicted performance in Por-
tuguese and mathematics. In discussing the results, Paget acknowledges limitations
of the study but draws attention to differences between research findings about the
relationship between resources and outcomes in the developing and developed
world. Further research to inform the wise allocation of resources in varied
complex contexts around the world is indeed warranted. Paget’s paper also includes
one of the few English language descriptions and critiques of the Prova Brasil
system – a valuable contribution in itself.
The final paper focusing on educational effectiveness issues draws on data from
another Latin American location – Peru: ‘Classroom composition and its associa-
tion with students’ achievement and socioemotional characteristics in Peru’ (Cueto,
León, & Miranda, 2015). The study addresses a key debate in education across the
world, the influence of school and classroom socio-economic context on student
outcomes, and provides relevant evidence in a new country context to contrast
against previous research such as Timmermans and Thomas (2014). Specifically,
the research employs longitudinal data from the Young Lives study (2015) to calcu-
late the proportion of students in a class with a parent who completed at least sec-
ondary education, as a measure of classroom socio-economic composition across
schools in Peru. The findings suggest that there are high levels of student segrega-
tion, especially at the extremes of the distribution, and that classroom composition
was positively associated with achievement in mathematics and with students’ sense
of belonging, but not with reading achievement or perceptions of security
at school. Not surprisingly, the authors conclude that school and classroom socio-
economic composition is a relevant variable both for describing national
educational systems in Latin America and for contextualising
student achievement.
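Constructing such a composition measure is straightforward once student background data are linked to classes. A minimal sketch follows, with hypothetical column names rather than the actual Young Lives variables.

```python
import pandas as pd

# Hypothetical student records: one row per student, with a flag for
# whether at least one parent completed secondary education.
students = pd.DataFrame({
    "class_id":         ["A", "A", "A", "B", "B", "B"],
    "parent_secondary": [1, 0, 1, 0, 0, 1],
})

# Classroom composition: the proportion of students in each class whose
# parent completed at least secondary education.
composition = students.groupby("class_id")["parent_secondary"].mean()

# Attach the class-level measure back to each student record, ready to
# enter a model alongside achievement and socioemotional outcomes.
students["class_composition"] = students["class_id"].map(composition)
print(composition)
```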
The five papers discussed above all sit readily within the
established tradition of educational effectiveness studies. The final paper in this
issue, ‘Framing and assessing classroom opportunity to learn: the case of Mexico’
(Jensen, Martinez, & Escobar, 2015), is also concerned with factors that influence
student outcomes, but differs in that it is a conceptual piece, synthesising theory and
research. The authors focus on the classroom level, citing Willms (2006) and
Riddell (2008) as well as work conducted solely in Mexico emphasising the impor-
tance of in-school variation. Specifically, Bryant Jensen and his colleagues are con-
cerned with students’ opportunities to learn (OTL) in classrooms, in particular what
constitutes OTL, how to assess OTL, and how to address cultural and contextual
differences. They identify three core elements of classroom OTL, described as
instructional time, generic quality and local quality. These are characterised as
‘how much’ opportunity is provided, ‘how well’ the learning opportunities are
delivered, and ‘how meaningfully’ the opportunities are instantiated. Each element
is elaborated with associated factors and variables, presented not as an exhaustive
list but as a prompt to further work in the area. They describe themselves as taking
a pragmatic perspective, integrating theoretical and empirical strengths from both
positivist and interpretivist approaches. Their work speaks both to researchers seek-
ing to develop valid and reliable measures of key classroom processes and to prac-
titioners (teachers and professional development leaders) trying directly to improve
the quality of learning in classrooms. The detailed conceptualisation of classroom
opportunities to learn as a precursor to instrument development has relevance for
developed and developing countries worldwide. Jensen, Martinez and Escobar are,
however, particularly concerned with improving opportunities to learn for underpriv-
ileged children in Mexico, seeing the quality elements of their framework as
enabling the interweaving of local features with generic dimensions. Their summary
of education policy in Mexico is an additional contribution to the regional knowl-
edge base.
In terms of overall trends, the five papers that address substantive and method-
ological issues related to testing and educational effectiveness demonstrate a grow-
ing academic field on these topics in Latin America, as well as illustrating high-
level regional technical expertise in quantitative research methods. Collectively, the
papers add significantly to the global evidence base on school and teacher effects
and, as Chapman et al. (2015) have noted, there is an urgent need to use such
evidence to address equity issues in educational quality.
The data-sets analysed in the studies also indicate the ongoing rapid increase in
the extent of large-scale testing being conducted in Latin American countries.
Vegas and Petrow’s (2008) regional review identified 19 Latin American countries
as having conducted some form of national assessment, while the historical and
contextual analyses included in this issue’s papers reinforce the point. A specific
issue relates to the need to construct longitudinal data-sets, rather than just cross-
sectional, in order to explore more valid and sophisticated VA measures of stu-
dents’ relative progress. This requires the systematic implementation of unique stu-
dent identifiers to match records over time and would allow more sensitive
evaluation of the factors contributing to student learning.
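As a minimal illustration of why unique identifiers matter (all names and values below are synthetic, not drawn from any actual national dataset), linking two testing rounds then reduces to a join on the identifier:

```python
import pandas as pd

# Hypothetical records from two testing rounds, linked by a unique
# student identifier carried across years.
grade8 = pd.DataFrame({"student_id": [101, 102, 103],
                       "maths_g8": [48, 55, 62]})
grade10 = pd.DataFrame({"student_id": [101, 102, 104],
                        "maths_g10": [53, 54, 70]})

# An inner join keeps only students observed in both rounds, which is
# the prerequisite for any value-added analysis of relative progress.
linked = grade8.merge(grade10, on="student_id", how="inner")
linked["gain"] = linked["maths_g10"] - linked["maths_g8"]
print(linked)
```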
However, even though national assessment data are very useful for research and possibly for policy pur-
poses, if they are generated through high-stakes tests, there are implications to be
considered. An earlier special issue of this journal addressed the value, fairness and
consequences of high-stakes testing (Stobart & Eggen, 2012), while Harlen and
Deakin Crick’s (2002) review has important messages about the impact of testing
on students’ motivation for learning. Berliner’s (2011) analysis of the harm that
follows curriculum narrowing as a rational response to high-stakes testing identifies
negative consequences not only for students but also for teachers and national
economies.
International testing is also evident in Latin America: 15 countries participated
in UNESCO’s Third Regional Comparative and Explanatory Study (TERCE) in 2015,
part of the Latin American Laboratory for Assessment of the Quality of Education
(LLECE). Only eight Latin American countries participated in the most recent
OECD PISA survey in 2012, possibly due to concerns about the methodology and the appli-
cability of the PISA test to the contexts of developing countries (BBC, 2015;
Postlethwaite, 2004). Interestingly, though, three of the six countries participating in
the current PISA for Development project are from Latin America – Ecuador,
Guatemala and Paraguay (OECD, 2015). It remains to be seen whether there will be
increased participation in the PISA programme by Latin American countries, or
whether concerns about negative consequences such as those raised by 100
academics from around the world (Andrews et al., 2014) hold sway.
Whatever the future direction of national and international testing in Latin
America, there are millions of students whose education is shaped by historical
context, by policies influenced by effectiveness studies, and by the everyday reality of
the opportunities to learn experienced in the classroom. The articles in this special
issue reveal a little of the assessment research, issues and context of the region,
and it is hoped that future contributions to the journal will expand the knowledge base
still further.

References
Andrews, P., Atkinson, L., Ball, S. J., Barber, M., Beckett, L., Berardi, J., ... Zhao, Y.
(2014, May 6). OECD and PISA tests are damaging education worldwide – academics.
The Guardian. Retrieved January 15, 2015, from http://www.theguardian.com/education/
2014/may/06/oecd-pisa-tests-damaging-education-academics.
BBC. (2015). Latin America’s wake-up call on global school tests. Retrieved October 10,
2015, from http://www.bbc.co.uk/news/business-32161854
Berliner, D. (2011). Rational responses to high stakes testing: The case of curriculum nar-
rowing and the harm that follows. Cambridge Journal of Education, 41, 287–302.
doi:10.1080/0305764X.2011.607151.
Cervini, R. (2009). Class, school, municipal, and state effects on mathematics achievement
in Argentina: A multilevel analysis. School Effectiveness and School Improvement, 20,
319–340.
Chapman, C., Reynolds, D., Muijs, D., Sammons, P., Stringfield, S., & Teddlie, C. (2015).
Educational effectiveness and improvement research and practice: The emergence of the
discipline. In C. Chapman, D. Muijs, D. Reynolds, P. Sammons, & C. Teddlie
(Eds.), The international handbook of educational effectiveness: Research, policy and
practice (pp. 1–24). London: Routledge.
Cueto, S., León, J., & Miranda, A. (2015). Classroom composition and its association with
students’ achievement and socioemotional characteristics in Peru. Assessment in
Education: Principles, Policy & Practice. doi:10.1080/0969594X.2015.1105783.
Gvirtz, S., & Larripa, S. (2004). National evaluation system in Argentina: Problematic pre-
sent and uncertain future. Assessment in Education: Principles, Policy & Practice, 11,
349–364. doi:10.1080/0969594042000304636.
Gysling, J. (2015). The historical development of educational assessment in Chile:
1810–2014. Assessment in Education: Principles, Policy & Practice. doi:10.1080/
0969594X.2015.1046812.
Harlen, W., & Deakin Crick, R. (2002). A systematic review of the impact of summative
assessment and tests on students’ motivation for learning. Retrieved October 10, 2015,
from https://eppi.ioe.ac.uk/cms/LinkClick.aspx?fileticket=Pbyl1CdsDJU%3D&tabid=
108&mid=1003
Helman, L., Delbridge, A., Parker, D., Arnal, M., & Mödinger, L. J. (2015). Measuring
Spanish orthographic development in private, public and subsidised schools in Chile.
Assessment in Education: Principles, Policy & Practice. doi:10.1080/0969594X.2015.
1038217.
Husén, T., & Postlethwaite, T. N. (1996). A brief history of the International Association for
the Evaluation of Educational Achievement (IEA). Assessment in Education: Principles,
Policy & Practice, 3, 129–142.
Jensen, B., Martinez, M., & Escobar, A. (2015). Framing and assessing classroom opportu-
nity to learn: The case of Mexico. Assessment in Education: Principles, Policy &
Practice. doi:10.1080/0969594X.2015.1111192.
Johnson, S. (1999). International Association for the Evaluation of Educational Achievement
Science Assessment in Developing Countries. Assessment in Education: Principles,
Policy & Practice, 6, 57–73. doi:10.1080/09695949992991.
Meckes, L., & Carrasco, R. (2010). Two decades of SIMCE: An overview of the national
assessment system in Chile. Assessment in Education: Principles, Policy & Practice, 17,
233–248. doi:10.1080/09695941003696214.
Muñoz-Chereau, B., & Thomas, S. M. (2015). Educational effectiveness in Chilean
secondary education: Comparing different ‘value added’ approaches to evaluate schools.
Assessment in Education: Principles, Policy & Practice. doi:10.1080/0969594X.2015.
1066307.
OECD. (2015). PISA for Development – participating countries. Retrieved November 5, 2015,
from http://www.oecd.org/pisa/aboutpisa/pisa-for-development-participating-countries.htm
Paget, C. (2015). Brazilian national assessment data and educational policy: An empirical
illustration. Assessment in Education: Principles, Policy & Practice. doi:10.1080/
0969594X.2015.1113929.
Postlethwaite, N. T. (2004). What do international assessment studies tell us about the quality
of school systems? Paper commissioned for the EFA Global Monitoring Report 2005:
The quality imperative. Paris: UNESCO. Retrieved January 18, 2015, from http://portal.
unesco.org/education/es/file_download.php/248da74f90b516176353b954962b5980Postlethwaite.DOC
Riddell, A. (2008). Factors influencing educational quality and effectiveness in developing
countries: A review of research. Berlin, Germany: German Federal Ministry for Eco-
nomic Cooperation and Development.
Schiefelbein, E., & Schiefelbein, P. (2003). From screening to improving quality: The case
of Latin America. Assessment in Education: Principles, Policy & Practice, 10, 141–159.
doi:10.1080/0969594032000121252.
Soares, J. F., Gonzaga Alves, M. T., & Xavier, F. P. (2015). Effects of Brazilian schools on
student learning. Assessment in Education: Principles, Policy & Practice. doi:10.1080/
0969594X.2015.1043856.
Stobart, G., & Eggen, T. (2012). High-stakes testing – value, fairness and consequences.
Assessment in Education: Principles, Policy & Practice, 19, 1–6.
Taut, S., Valencia, E., Palacios, D., Santelices, M. V., Jiménez, D., & Manzi, J. (2015).
Teacher performance and student learning: Linking evidence from two national assess-
ment programmes. Assessment in Education: Principles, Policy & Practice. doi:10.1080/
0969594X.2014.961406.
Thomas, S., Salim, M., Muñoz-Chereau, B., & Peng, W.-J. (2012). Educational quality,
effectiveness and evaluation: Perspectives from China, South America and Africa. In P.
Chapman, A. Armstrong, D. Harris, D. Muijs, D. Reynolds, & P. Sammons (Eds.),
School effectiveness and school improvement research, policy and practice. Challenging
the orthodoxy? (pp. 125–145). London: Routledge.
Timmermans, A., & Thomas, S. M. (2014). The impact of student composition on schools’
value-added performance: A comparison of seven empirical studies. School Effectiveness
and School Improvement, 26, 487–498. doi:10.1080/09243453.2014.957328.
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2007). Teacher professional learning and
development: Best Evidence Synthesis Iteration (BES). Wellington, New Zealand:
Ministry of Education. Retrieved from http://www.educationcounts.govt.nz/__data/assets/
pdf_file/0017/16901/TPLandDBESentire.pdf
Vegas, E., & Petrow, J. (2008). Raising student learning in Latin America: The challenge
for the 21st century. Washington, DC: World Bank.
Willms, J. D. (2006). Learning divides: Ten policy questions about the performance and
equity of schools and schooling systems. Montreal, Quebec: UNESCO Institute for
Statistics.
Young Lives. (2015). Peru. Retrieved October 20, 2015, from http://www.younglives.org.uk/
where-we-work/peru

Sue Swaffield
Faculty of Education, University of Cambridge, Cambridge, UK

Sally Thomas
Graduate School of Education, University of Bristol, Bristol, UK
