Methodology of Educational Measurement and Assessment
Susanne Kuger
Eckhard Klieme
Nina Jude
David Kaplan Editors
Assessing
Contexts of
Learning
An International Perspective
Methodology of Educational Measurement
and Assessment
Series editors:
Bernard Veldkamp, Research Center for Examinations and Certification (RCEC),
University of Twente, Enschede, The Netherlands
Matthias von Davier, Educational Testing Service, Princeton, New Jersey, USA
This new book series collates key contributions to a fast-developing field of education
research. It is an international forum for theoretical and empirical studies exploring
new and existing methods of collecting, analyzing, and reporting data from
educational measurements and assessments. Covering a high-profile topic from
multiple viewpoints, it aims to foster a broader understanding of fresh developments
as innovative software tools and new concepts such as competency models and skills
diagnosis continue to gain traction in educational institutions around the world.
Methodology of Educational Measurement and Assessment offers readers reliable
critical evaluations, reviews and comparisons of existing methodologies alongside
authoritative analysis and commentary on new and emerging approaches. It will
showcase empirical research on applications, examine issues such as reliability,
validity, and comparability, and help keep readers up to speed on developments in
statistical modeling approaches. The fully peer-reviewed publications in the series
cover measurement and assessment at all levels of education and feature work by
academics and education professionals from around the world. Providing an
authoritative central clearing-house for research in a core sector in education, the
series forms a major contribution to the international literature.
Assessing Contexts
of Learning
An International Perspective
Editors
Susanne Kuger
Department for Educational Quality and Evaluation
German Institute for International Educational Research (DIPF)
Frankfurt, Germany

Eckhard Klieme
Department for Educational Quality and Evaluation
German Institute for International Educational Research (DIPF)
Frankfurt, Germany
Foreword
and repeat studies of the same subject matter area only appeared 15–20 years apart.
Starting with TIMSS 1995, PISA 2000, and PIRLS 2001, it became possible to
obtain information about achievement trends over cycles of 3–5 years, which made
the results much more useful and meaningful, because interpretations did not have
to be confined to comparisons with other countries. Typically, the early studies
could not claim that the samples of students were representative of the population,
but through better student and school registers, and through advances in sampling
techniques, well-founded claims of representativeness could now be made.
Furthermore, even though the early ILSAs often used matrix-sampling designs, the
psychometric techniques available did not allow results for students taking different
items to be put on the same scale; now, however, this is easily done with techniques
based on item-response theory. Thus, there has been immense methodological
development from the early ILSAs to the current generation of studies.
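The IRT-based linking described above can be sketched in a deliberately simplified form: a mean/sigma transformation that places anchor-item difficulties estimated on two different test forms onto one common scale. All numbers below are invented, and operational ILSAs use considerably more elaborate calibration procedures; this is only an illustration of the underlying idea.

```python
# Mean/sigma linking: express item difficulties estimated on form Y on
# the scale of form X, using anchor items administered in both forms.
# Values are invented; operational ILSAs use richer IRT calibrations.
from statistics import mean, stdev

def mean_sigma_link(anchors_x, anchors_y):
    """Slope A and intercept B such that b_x ≈ A * b_y + B."""
    A = stdev(anchors_x) / stdev(anchors_y)
    B = mean(anchors_x) - A * mean(anchors_y)
    return A, B

# Difficulties of the same anchor items, estimated separately per form
anchors_x = [-1.0, 0.0, 1.0, 2.0]   # on the form-X scale
anchors_y = [-0.3, 0.2, 0.7, 1.2]   # on the form-Y scale

A, B = mean_sigma_link(anchors_x, anchors_y)   # here A ≈ 2.0, B ≈ -0.4

# A form-Y-only item can now be placed on the form-X scale:
b_x_equivalent = A * 0.8 + B   # ≈ 1.2
print(A, B, b_x_equivalent)
```

Once A and B are known, results for students or items observed only on one form can be reported on the common scale, which is what makes matrix-sampling designs analyzable.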
However, while great empirical advances have been made, a corresponding
degree of theoretical progress cannot be seen. It has always been the ambition in
ILSA research to build a body of empirically grounded theoretical knowledge about
educational processes and outcomes, but reviews both of early and recent ILSA
studies have concluded that this aim has not been reached. The fundamental idea of
ILSA research is to take advantage of differences between educational systems, but
this idea has been challenged by the fact that the educational systems are embedded
in countries with different cultures, languages, economies, and historical back-
grounds. This heterogeneity makes it virtually impossible to make correct causal
inferences about determinants of achievement, and particularly so when only cross-
sectional data is available, as is typically the case in ILSA. Yet another reason for the
failure to create a body of theoretical knowledge is that the ILSA researchers have
put much less effort into conceptualizing and measuring possible explanatory fac-
tors than in defining and measuring the cognitive outcomes. This is where the
research reported in the present volume enters into contention.
This book uses the field trial for PISA 2015 as an illustrative example, aiming to
develop a theoretical foundation for ILSA research and to identify the most impor-
tant contextual constructs in different domains. The book is structured into four
main parts. Part I is an introduction, which in four chapters describes the theoretical
and methodological framework: “Dimensions of Context Assessment,” “The
Assessment of Learning Contexts in PISA,” “The Methodology of PISA: Past,
Present, and Future,” and “An Introduction to the PISA 2015 Field Trial: Study
Design and Analysis Procedures.” Part II comprises four chapters, each of which
focuses on different aspects of student background: “Social Background,” “Ethnicity
and Migration,” “Early Childhood Learning Experiences,” and “Parental Support
and Involvement in School.” Part III deals with outcomes of education beyond
achievement and comprises five chapters: “Bias Assessment and Prevention in
authors observe that if the sampling procedure of PISA could be changed in such a
way that data from the same schools are collected in two consecutive cycles, it
would be possible to investigate to what extent changes in specific aspects of school
learning environments are associated with changes in cognitive and affective stu-
dent learning outcomes. Such a design could dramatically improve the possibilities
for developing empirically grounded theoretical knowledge about school-level edu-
cational processes and outcomes. This is just one of a large number of useful sug-
gestions presented in this volume.
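As a toy illustration of this suggested design (all school data invented), a change-on-change analysis would compute, per school, the between-cycle difference on a context measure and on an outcome, and then relate the two sets of differences:

```python
# Toy change-on-change analysis for a repeated-schools design: relate
# between-cycle changes in a school-level context measure to changes in
# achievement. All school data are invented.
from math import sqrt

# (school_id, climate_t1, climate_t2, mean_score_t1, mean_score_t2)
schools = [
    ("A", 2.1, 2.6, 480, 496),
    ("B", 3.0, 2.9, 510, 507),
    ("C", 2.5, 3.2, 495, 519),
    ("D", 2.8, 2.7, 505, 500),
]

delta_climate = [c2 - c1 for _, c1, c2, _, _ in schools]
delta_score = [s2 - s1 for _, _, _, s1, s2 in schools]

def pearson(xs, ys):
    """Pearson correlation of two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# In this invented example, schools whose climate improved also tended
# to gain in achievement, giving a strongly positive r.
r = pearson(delta_climate, delta_score)
print(round(r, 3))
```

Such a difference-score design removes stable school-level confounders, which is precisely why it would strengthen causal interpretation relative to purely cross-sectional comparisons.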
In summary, this book reports work that aims to put ILSA on a solid theoretical
foundation, to identify the most important contextual constructs in different
domains, and to propose indicators that are suitable for measuring the constructs. In
conjunction with this published volume, an electronic repository has also been built,
in which the actual survey items are stored along with their metadata. While the
work comprised in this volume was conducted in the context of the field trial for the
PISA 2015 study, the contribution goes far beyond this particular investigation
and should rather be seen as creating important infrastructure for the future develop-
ment of ILSAs in general.
Preface

This edited volume and the additional electronic material are assembled to augment
the interplay between educational effectiveness research (EER) and school achieve-
ment studies in international large-scale assessments (ILSAs) by bringing together
state-of-the-art research knowledge and current practical and policy discussions on
a wide range of EER topics. Furthermore, this volume seeks to increase the degree
of transparency in how this knowledge has been applied to develop and evaluate
questionnaire material for the context assessment in an example ILSA: PISA 2015
(Programme for International Student Assessment). The intentions of this volume
are fourfold: (1) to illustrate how a close collaboration between EER and ILSA can
inspire both fields and increase their scope, (2) elucidate exemplary topics of inter-
action, (3) highlight challenges that arise during fieldwork, and (4) provide a sound
theoretical foundation for future work at this interface. We hope that this volume
and the additional electronic material will inspire and facilitate the future work of
others in education research, monitoring, and evaluation.
The editors of this volume were involved in the preparation and analysis of dif-
ferent ILSAs in the past and were responsible for assembling, trialing, and evaluat-
ing the context assessment for PISA 2015 under contract with the Organisation for
Economic Co-operation and Development (OECD). We therefore use PISA 2015 as
an example to relate the general and sometimes abstract discussions in this volume
to a real-life study. This choice provides an opportunity to incorporate field trial
instruments into the publication and to relate all theoretical and methodological
considerations to online material that can be used in further research, as well as to
official OECD publications for PISA 2015 (framework, questionnaire instruments,
and data). All authors in this volume were involved in questionnaire development
and all contributed to the context assessment in their field of expertise.
This volume is divided into an introductory part and three subsequent parts,
which address several thematic frameworks for a number of policy topics. In the
introductory Part I, three chapters first summarize particulars of international con-
text assessments, the changes that have been made to context assessment over time,
and methodological considerations. A fourth chapter introduces the PISA 2015 field
trial from the context assessment perspective.
Parts II, III, and IV of this volume include a series of chapters on student back-
ground, student education outcomes beyond achievement, and learning in schools.
Each chapter presents a thematic framework for one topic in EER and a literature
review of relevant theoretical concepts and their practical and policy relevance. The
chapters also discuss which constructs should ideally be assessed to enable report-
ing on the particular topic in sufficient breadth (e.g., to enable country comparisons
on different topics) and detail (e.g., to link them to student performance). In addi-
tion, the chapters discuss possible limitations that result from practical or policy
considerations involved in international school effectiveness assessments. Although
the reviews refer to the EER literature in general, and often take into account exam-
ples from different ILSAs, most examples refer to the PISA 2015 cycle.
A digital appendix to this volume contains the questionnaire material that was
developed for the field trial of PISA 2015. The “Datenbank zur Qualität von Schule”
(a questionnaire repository on school quality) at the German Institute for International
Educational Research in Frankfurt, Germany (http://daqs.fachportal-paedagogik.
de/), hosts questionnaire material in its English and French source versions, as well
as the nationally adapted and translated versions of countries participating in the
study. Additionally, the repository lists statistics to evaluate the functioning of each
question at the international and national levels, based on field trial data.
We hope that this volume promotes future secondary analyses with ILSA data;
facilitates the development of further educational effectiveness research, monitor-
ing, and evaluation studies; helps to justify conclusions and recommendations based
on ILSA results; adds transparency to the discussions and further development of
ILSAs; and thus contributes to the interplay between EER and ILSAs.
Part I Introduction
1 Dimensions of Context Assessment........................................................ 3
Susanne Kuger and Eckhard Klieme
2 The Assessment of Learning Contexts in PISA .................................... 39
Nina Jude
3 The Methodology of PISA: Past, Present, and Future ........................ 53
David Kaplan and Susanne Kuger
4 An Introduction to the PISA 2015
Questionnaire Field Trial: Study Design
and Analysis Procedures ......................................................................... 75
Susanne Kuger, Nina Jude, Eckhard Klieme, and David Kaplan
Contents
1.1 Introduction 4
1.2 Learning in Contexts Worldwide 7
1.2.1 Taxonomies of Topics and Constructs 7
1.2.2 Common Content Areas 9
1.2.2.1 Education Outcomes 11
1.2.2.2 Student Background 12
1.2.2.3 Teaching and Learning Processes 13
1.2.2.4 School Policies and Governance 13
1.3 Expansion of the Common Content 14
1.3.1 Rationale for Expansion 14
1.3.2 Expanding Target Groups 16
1.3.3 Expanding the Student Sample 17
1.3.4 Expanding Available Data Sources 18
1.4 Increasing the Analytical Power of ILSAs 19
1.4.1 Reusing Different ILSA Products 20
1.4.2 Broadening the Variety of Analytical Approaches 20
1.4.3 Combining Different ILSAs in Secondary Analyses 21
1.5 Particulars of International Context Assessments 25
1.5.1 Breadth and Variety of Topics and Assessment Formats 25
1.5.2 Heterogeneity of Learning Contexts from a Global Perspective 27
1.6 ILSAs for EER and EER for ILSAs 30
References 31
why this lowest common denominator should be enriched according to the respec-
tive study goals and designs of the different programs. This chapter discusses some
possible directions and, further, provides suggestions as to how the scope of ILSAs
may be increased to better inform education research and policy in the future.
Although this framework model is applicable to learning contexts worldwide,
context assessments in ILSAs need to take into account the many similarities and
differences among education systems. A final aim of this chapter therefore is to
discuss some critical issues that arise from an international perspective in ILSAs.
1.1 Introduction
multitude of researched factors (Scheerens and Bosker 1997; Scheerens et al. 2005),
or even to provide comprehensive theoretical models of educational effectiveness,
such as the Dynamic Model developed by Creemers and Kyriakides (2008, 2010).
Typically, these reviews summarize results obtained in individual studies conducted
in one or a few countries at a single point in time, and there is a tendency to over-
generalize these fragmented research results, and their applicability, to all students in
all school types and school systems at all times. To date, only a few attempts have
been made to increase the generalizability of results by actually including different
school systems, teaching traditions, and cultural settings in research studies, so as to
empirically compare effects across countries. The exceptions are international
large-scale assessments (ILSAs), which are best known for their use in system-wide
educational monitoring. Due to their long history and their relatively large number
of participating countries, the following most prominent examples (in alphabetical
order; for a more comprehensive list see Schwippert and Lenkeit 2012), cover edu-
cation worldwide, and will be discussed repeatedly in this chapter:
• Progress in International Reading Literacy Study (PIRLS)
• Programme for International Student Assessment (PISA)
• Trends in International Mathematics and Science Study (TIMSS)
Furthermore, there are ILSAs that cover additional regions of the world, or non-
student samples. The most influential ILSA programs with such a focus are, most
likely:
• Programme d’Analyse des Systèmes Educatifs de la CONFEMEN (the standing
committee of ministers of education of francophone African countries; PASEC)
• Programme for the International Assessment of Adult Competencies (PIAAC)
• The Southern and Eastern Africa Consortium for Monitoring Educational Quality
(SACMEQ)
• The Teaching and Learning International Survey (TALIS)
• Teacher Education and Development Study in Mathematics (TEDS-M)
The overarching and initial goal of such ILSAs is to provide indicators on the
effectiveness, equity, and efficiency of educational systems (Bottani and Tuijnman
1994), to set benchmarks for international comparison, to stimulate curriculum
development, and to monitor trends over time (Klieme and Kuger 2016; Mullis et al.
2009a). Consequently, these programs have attracted attention in many countries
and have exerted sometimes far-reaching influence on education policy. In addition,
researchers increasingly draw on the results of these assessments to study, on the
one hand, the universality and generalizability of certain findings in educational
effectiveness and, on the other hand, the respective national, regional, cultural, and
other group-specific features that may moderate universal mechanisms (Hiebert
et al. 2003). This volume is intended as a bridge between EER and ILSA.
In the context of this book we consider ILSAs as international assessments of
education topics that target large and representative samples of students and/or
teachers, as well as other stakeholders in education such as school principals or parents.
Here, the term “assessment” is not restricted to tests addressing achievement, compe-
tencies or other cognitive outcomes. Even in studies that are well-known for mea-
suring student achievement and literacy, such as PIRLS and PISA, the vast majority
of measures are used to contextualize these cognitive outcomes, adding both non-
cognitive outcomes (e.g., student motivation and well-being) and measures of struc-
tures and processes in education. This wider range of measures is in the following
discussion called “context assessment”. OECD’s (Organisation for Economic
Co-operation and Development) TALIS is included in the term ILSA because it
covers learning contexts at school-, classroom- and teacher-level, and includes the
assessment of valuable noncognitive teacher outcomes (e.g., self-efficacy). This
chapter uses the term “ILSA programs” (e.g., PISA, TIMSS) while also referring to
individual “ILSA studies or cycles” (e.g., PISA 2015, TIMSS 2011).
The relationships—i.e., potential dependencies, contributions, and benefits—
between EER and ILSAs are mutual in nature (Klieme 2012): On the one hand, a
sound framework of context assessment of any ILSA study must take into account
theoretical considerations, modeling approaches and research results in EER to
develop a meaningful system of reporting indicators (Bryk and Hermanson 1994;
Kaplan and Elliott 1997). Research on educational effectiveness, on the other hand,
can inform ILSAs in two ways. First, the description of education in different cul-
tures, school systems, and school contexts that is provided by ILSA studies can
inspire EER to discover new fields of research. National and international patterns
or regional peculiarities in ILSA data are easily accessible through screening pub-
licly available ILSA data, and can trigger research questions that lead to the careful
development of smaller, targeted EER studies.
Second, ILSAs carefully develop research instruments and methodology that
may be used in further studies both within and across countries. The high quality
standards typically involved in the preparation and implementation of ILSAs, and
the large intercultural sample assessed, provide EER with high quality, culturally
adapted, policy-relevant material in a large number of languages. Therefore, ILSAs
offer an unmatched source of ready-to-use material for EER that has been devel-
oped and refined under strict quality guidelines and discussed by education, policy,
questionnaire, and survey method experts.
This chapter further reflects on this relation between EER and ILSAs. It first
summarizes taxonomies, to organize the content of context assessments in ILSAs
and the similarities of content across studies (Sect. 1.2). This chapter further dis-
cusses how to enlarge the scope and value of ILSAs for education policy makers
and researchers by either expanding and developing the existing ILSAs further
(Sect. 1.3) or by making better use of their already existing output (Sect. 1.4).
Finally, this chapter points out major differences of context assessment and cogni-
tive assessments in ILSAs, and how they influence the planning, preparation and
implementation of context assessments in ILSAs (Sect. 1.5).
Critical variables in EER have long been grouped into input and output factors
(Reynolds et al. 2000). Such a framework relates education to a production function
in an organization: i.e. it establishes an analogy to economic models and assumes
that certain input factors condition the creation of certain outputs. This very simpli-
fied view was then expanded to the Context-Input-Process-Output model (CIPO-
model; Purves 1987; Scheerens and Bosker 1997). The CIPO model groups together
factors of more distal and proximal conditions of education (context and input),
education processes and outcomes. In this model the term “context” is used in a
restricted sense, only referring to economic, social, and cultural factors outside of
schools, compared to the more comprehensive approach taken in the present chapter.
The advantage of this CIPO framework is that it covers the wide variety of topics
in EER:
– (C) general societal conditions
– (I) conditions of the education system, and school, classroom, and individual
(e.g., vertical and horizontal differentiation of the school system, organizational
structure of a school, classroom composition of the student body, students’
cultural, ethnic, and socio-economic background)
– (P) educational processes in diverse learning environments (e.g., system-wide
evaluation procedures, school leadership and teacher collaboration practices,
teaching activities and learning opportunities in classrooms, parental involve-
ment in student learning)
– (O) education outcomes, which can be cognitive or noncognitive, and either
domain-general or domain-specific. Domain-general noncognitive outcomes include,
for example, students' well-being and life satisfaction, general value beliefs, health
or working habits, while domain-specific noncognitive outcomes include mathe-
matics or science self-efficacy, interests, attitudes towards science and the relation-
ship of education and career aspirations to certain school subjects. Resources can
also be dedicated to education in general (e.g., school facilities) or to a certain sub-
ject (e.g., textbooks, library, laboratory staff, and teachers), and education processes
can be either domain-specific (e.g., teaching a certain subject) or general (e.g., allo-
cating resources to schools, school climate, school and system evaluation).
Typically, ILSA studies gather information on the majority of these categories:
i.e., both domain-general and domain-specific information (across and) on system,
school, and individual levels, about input factors, processes, and education outcomes.
Although the study design and the main research question of an ILSA heavily
influence the exact specification of test and questionnaire material, there is a small-
est common denominator: a set of information of such essential value to EER that it
is included in almost all ILSA studies in a similar way. The following subsection
provides a description of this set.
The majority of studies in EER and ILSAs include a set of common material in their
context assessment, independent of their study goals, design, or sampling strategy.
There are three degrees of commonality. First, there are identical questions across
some ILSAs (mainly across related ILSAs from the same organization); examples
are questions on participant’s gender or educational background. Identical questions
typically target standard socio-demographic information about the participants or
the structural conditions of learning environments. A second degree of commonality
is at the conceptual level. This is more frequently found in questions seeking more
detailed background information about the participant and domain-general content
areas. For example, all ILSAs with student participants assess parental education
background, albeit in very different ways (e.g., asking the student or their parents),
and many assess learning motivation. A third degree of commonality can be seen in
parallel assessments of certain constructs. As explained above, some features of the
learning context can be assessed by targeting specific domains or conditions. While
the wording of such a construct may be customized to the domain of a certain study,
it may still be parallel to the wording in studies that target other domains. For exam-
ple, while a study on science achievement assesses “interest in science”, a study on
math achievement assesses “interest in mathematics”. Such a parallelism enables
later comparisons of relations between achievement and “domain-specific interest”.
Another example could be assessing reasons for school choice in a parent sample
that chose a certain school for their children’s education and in a teacher sample that
chose a future workplace.
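Such parallel wording can be thought of as instantiating one domain-general construct template per study domain. The sketch below is purely illustrative; the template texts and construct names are hypothetical, not actual ILSA items.

```python
# Hypothetical construct templates; actual ILSA items are developed and
# validated by expert groups, so this only illustrates the parallelism.
TEMPLATES = {
    "interest": "How much are you interested in {domain}?",
    "self_efficacy": "How confident are you when solving {domain} problems?",
}

def instantiate(construct, domain):
    """Render a domain-specific item from a domain-general template."""
    return TEMPLATES[construct].format(domain=domain)

print(instantiate("interest", "science"))       # science-focused cycle
print(instantiate("interest", "mathematics"))   # mathematics-focused cycle
```

Because only the domain slot changes, relations between achievement and "domain-specific interest" estimated in different cycles remain comparable.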
1 This and the following three sections were based on the OECD draft framework for context
assessments for PISA 2015 (2013: http://www.oecd.org/pisa/pisaproducts/PISA-2015-draft-questionnaire-framework.pdf), which was authored by the authors of this chapter.
2 For early exceptions to international longitudinal studies run on limited sets of countries and with
heavy methodological challenges, see Burstein (1993), Olmert and Weikart (1995); for current
preparations for a longitudinal ILSA see OECD (2015).
PISA, for example, has long included school type, location, and size, and uses aggre-
gated student data at the school level to represent student composition.
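Aggregating student data to the school level, as described here, can be sketched as follows (the SES-like index values are invented):

```python
# School composition from aggregated student data: mean of a student-
# level SES-like index per school. All index values are invented.
from collections import defaultdict

# (school_id, student_ses_index)
students = [
    ("sch1", -0.4), ("sch1", 0.2), ("sch1", 0.5),
    ("sch2", 1.1), ("sch2", 0.9),
]

by_school = defaultdict(list)
for school, ses in students:
    by_school[school].append(ses)

# School-level composition indicator: mean index of sampled students
school_composition = {s: sum(v) / len(v) for s, v in by_school.items()}
print(school_composition)
```

The resulting school-level means can then enter multilevel models alongside the individual-level index, separating composition effects from individual background effects.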
EER can make a major contribution to monitoring education—the initial and most
important goal of ILSAs—by embedding results in EER research and developing
policy advice in response. As policy makers have limited direct impact on teaching
and learning processes, EER delivers information on school-level factors that may
help improve schools, and thus indirectly improve student learning. As with teacher
and teaching variables, school effectiveness research has built a strong knowledge
base showing that “essential supports” promote school effectiveness (Bryk et al.
2010; see also Creemers and Reezigt 1997; Scheerens and Bosker 1997):
assessments can therefore serve two different purposes: First, it can broaden the
variety of topics reflected in the indicator system of the framework and thus increase
the number of policy topics that the ILSA serves (horizontal expansion) or second,
it can deepen our understanding of the theoretical model behind an existing set of
indicators (vertical expansion; Bryk and Hermanson 1994).
Horizontal expansion broadens the range of topics included in an ILSA context
assessment. This could mean that participants answer more questions that are
closely related to each other (e.g., asking students not only about their heritage lan-
guage but also about their language preferences inside and outside school, their
subjective perceptions of the value of bilingualism, etc.). Alternatively, a
horizontal expansion could introduce new topics of assessment (e.g., recent cycles
of PISA included more constructs on students' well-being and personality traits than
did older cycles). In both cases, horizontal expansion increases the policy outreach
of a study by delivering information on more topics than before.
Vertical expansion increases the depth and detail of information on a topic that is
already included in the common content areas. More-detailed assessment of a cer-
tain topic can help to improve the precision and validity of the assessment. For
example, a topic could be split up into more than one question to cover several
facets of a construct (e.g., the existence of school evaluation can be assessed with a
very broad yes/no question, or go into more detail to include questions on purposes,
measures, time points and uses of school evaluation). As another example, a vertical
expansion could ask a question from multiple perspectives to triangulate informa-
tion (e.g., both the principal of a school and the teachers might have valuable but not
necessarily congruent opinions on the school’s leadership). Most importantly, a ver-
tical expansion helps to cover a theoretical model of education effectiveness more
precisely, and thus could increase its explanatory power. Keeves and Lietz’s (2011)
observation of a trend towards more vertical expansion across context assessments
of ILSAs in the last decades, without corresponding endeavors to assemble the
respective EER background, is an important motive for this volume. The topical
chapters in Parts II-IV each compile partial model frameworks for one topic of
EER, to encourage further research and more in-depth modeling in this important
area.
Given that, typically, each ILSA sets aside a well specified, limited amount of
time for context assessment, the two directions of expansion are in conflict with
each other. An expansion in either direction requires additional testing time, and one
can either broaden the number of topics included in context assessment or else
increase the detail and precision of assessment. It is not possible to expand content
in both directions and still keep within the traditional time limitations. A rotated
context questionnaire design was implemented in PISA 2012 that can, in principle,
provide for horizontal and vertical expansion. But the implementation of a rotated
design for the context questionnaire in an ILSA leads to a number of methodologi-
cal challenges that are currently still under debate (see Adams et al. 2013; Kaplan
and Su 2016).
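The rotation idea can be illustrated with a minimal sketch: distribute three rotated question blocks over three forms so that each form carries a common part plus two blocks, and every pair of blocks appears together in some form. The block and form labels are hypothetical, and real rotation schemes are more elaborate.

```python
# Minimal sketch of a rotated questionnaire design: each form pairs a
# common part with two of three rotated blocks, so all blocks are
# covered without any single student answering everything.
# Block and form labels are hypothetical.
from itertools import combinations

common = ["core questions"]
rotated = ["block A", "block B", "block C"]

# One form per pair of rotated blocks -> three forms in total
forms = [common + list(pair) for pair in combinations(rotated, 2)]

for i, form in enumerate(forms, start=1):
    print(f"Form {i}: {form}")
```

Each rotated block appears in two of the three forms, and every pair of blocks co-occurs in exactly one form, which keeps their covariances estimable despite the planned missingness, the methodological crux debated in the literature cited above.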
Typically, ILSA programs assess data from students and schools. Students are tested
using a battery of test items to assess their performance in the study domain; in a
second step they fill out context assessment questionnaires. In the majority of stud-
ies, school questionnaires are filled in by the school principal or by any other eligi-
ble school representative. Although these two groups of participants can probably
provide reliable and essential information about students’ education, certain limita-
tions underlie this information. There may be, for example, time restrictions, sub-
jectivity bias, and limitations on knowledge, to name just a few. This and the
following section, therefore, discuss some innovative ideas on how to expand these
target groups and data sources to allow for the assessment of more, and more precise,
content in alternative formats.
Several ILSAs already include additional target groups, to learn more about
teaching and learning in school and at home. The most frequently approached addi-
tional target groups are teachers and parents. Participants from both groups can add
valuable insights into background information (context and input), education pro-
cesses and outcomes in school, out-of-school and at home. Teacher questionnaires
have been included in all cycles of TIMSS and PIRLS assessments and were first
introduced to PISA in the 2015 cycle. Teachers are valuable sources of information
about the curriculum in a certain school subject, about classroom teaching pro-
cesses, teacher cooperation and teaching resources in their school; about their initial
teacher training and professional development, their teaching goals, interests, and
enthusiasm, teaching incentives, assessment and evaluation practices, and many
more topics related to teacher background and everyday teaching processes. In
addition to teacher questionnaires directly implemented in a student ILSA, results
from the TALIS or the TEDS-M can provide relevant information about teachers at
the country level. Under certain conditions, student data from one ILSA and teacher
data from another, can even be statistically linked (Kaplan and McCarthy 2013).
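A country-level link of the kind alluded to here can be sketched as a simple merge of country aggregates from two studies on their shared country codes. The values below are invented, and this is only a caricature of the actual statistical methodology in Kaplan and McCarthy (2013).

```python
# Hypothetical country-level link between a student ILSA and a teacher
# ILSA: merge country aggregates on shared country codes. All values
# are invented; see Kaplan and McCarthy (2013) for real methodology.
pisa_country_means = {"DEU": 509, "FRA": 495, "JPN": 538}      # student scores
talis_country_means = {"DEU": 0.42, "FRA": 0.31, "KOR": 0.55}  # teacher index

# Keep only countries observed in both studies
linked = {
    c: (pisa_country_means[c], talis_country_means[c])
    for c in pisa_country_means.keys() & talis_country_means.keys()
}
print(sorted(linked))  # countries present in both studies: ['DEU', 'FRA']
```

The merged table supports only country-level inferences; the conditions under which such links are statistically defensible are exactly what the cited work examines.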
Valuable information can also be gathered from students’ parents. Parent ques-
tionnaires have been included in TIMSS, PIRLS and PISA assessments for several
cycles already. Parents are most frequently asked to provide information on early
childhood education, previous and current home learning activities and resources,
parental beliefs about education and future career paths, additional family back-
ground information, their cooperation with their child’s school, and educational
decision making, to name just a few (Hoover et al. 2013; Klieme and Kuger 2016).
A further source of information lies at the system level: curriculum and policy
experts could provide information about allocated resources to the educational sys-
tem, about policy reforms and the intended curriculum. Unfortunately, such ques-
tionnaires are not included in all ILSAs (e.g., Hoover et al. 2013).