
Teachers’ Knowledge of Error Analysis

School of Education, University of the Witwatersrand


Funded by the Gauteng Department of Education

9 January 2012

Updated June 2013


Contents page

Contents page
Tables
Acknowledgements
Introduction
Report Plan
Section One: Teacher knowledge of mathematical errors
Section two: Activity and Process
2.1. The Activity
Section three: Evaluation analysis methodology
3.1 Aim of the evaluation
3.2 Items evaluated
3.3 Evaluation Instrument
3.3.1 The coding template (see Appendix 4)
3.4 Training of coders and coding process
3.5 Validity and reliability check
3.6 Data analysis
Section Four: Analysis of data
4.1 An overview of the comparison of Round 1 and Round 2
4.2. Analysis of individual criteria
4.2.1 Procedural explanations
4.2.2 Conceptual explanations
4.2.3 Awareness of Mathematical Error
4.2.4 Diagnostic reasoning
4.2.5 Multiple explanations
4.2.6 Use of the everyday in explanations of the error
4.2.7 Comparative strengths and weaknesses between the two sets of grouped grades
4.2.8 Comparative strengths and weaknesses according to content area
Section Five: General findings
5.1 What are the findings about teachers reasoning about learners’ errors?
5.2 Teachers’ experiences of the error analysis activity
5.3 Findings from the quantitative analysis
5.4 Summary
Section Six: Implications for professional development: developing diagnostic judgement
Recommendations for professional development and for further research
Recommendations for professional development
Recommendations for further research
References
Appendices:
Appendix 1
Appendix 2
Appendix 3
Appendix 4
Appendix 5
Appendix 6
Appendix 7
Appendix 8
Appendix 9
Appendix 10
Appendix 11

Tables
Table 1: Domains of teacher knowledge and related error analysis categories
Table 2: “Error-focused activities” and “error-related activities”
Table 3: Round 1 Error analysis
Table 4: Round 2 Curriculum Mapping and Error Analysis
Table 5: Number of items analysed by groups (Rounds 1 and 2)
Table 6: Sample summary
Table 7: Grades 3-6 procedural explanations demonstrated in teacher text explanations
Table 8: Grades 7-9 procedural explanations demonstrated in teacher text explanations
Table 9: Procedural explanations demonstrated in teacher test explanations
Table 10: Grades 3-6 conceptual explanations demonstrated in teacher text explanations
Table 11: Grades 7-9 conceptual explanations demonstrated in teacher text explanations
Table 12: Conceptual explanations demonstrated in teacher test explanations
Table 13: Grades 3-6 awareness of mathematical error demonstrated in teacher text explanations
Table 14: Grades 7-9 awareness of mathematical error demonstrated in teacher text explanations
Table 15: Awareness of the mathematical error demonstrated in teacher test explanations
Table 16: Grades 3-6 diagnostic reasoning demonstrated in teacher text explanations
Table 17: Grades 7-9 diagnostic reasoning demonstrated in teacher text explanations
Table 18: Diagnostic reasoning in relation to the error demonstrated in teacher test explanations
Table 19: Grades 3-6 multiple explanations demonstrated in teacher text explanations
Table 20: Grades 7-9 multiple explanations demonstrated in teacher text explanations
Table 21: Multiple explanations of the error demonstrated in teacher test explanations
Table 22: Grades 3-6 use of the everyday demonstrated in teacher text explanations of the error
Table 23: Grades 7-9 use of the everyday demonstrated in teacher text explanations of the error
Table 24: Use of the everyday in explanations of the error demonstrated in teacher test explanations
Table 25: Content areas according to strength and weakness in explanations
Table 26: An example of an explanation that is accurate but incomplete
Table 27: Grade 5 test item explanations illustrating poor diagnostic judgement
Table A1: Category descriptors for “procedural explanations”
Table A2: Criterion 1 – Procedural explanation of the choice of the correct solution in relation to one item
Table A3: Criterion 1 – Procedural explanation of the choice of the correct solution in relation to a range of items
Table A4: Category descriptors for “conceptual explanations”
Table A5: Criterion 2 – Conceptual explanation of the choice of the correct solution in relation to one item
Table A6: Criterion 2 – Conceptual explanation of the choice of the correct solution in relation to a range of items
Table A7: Category descriptors for “awareness of mathematical error”
Table A8: Criterion 3 – Awareness of the error embedded in the incorrect solution in relation to one item
Table A9: Criterion 3 – Awareness of the error embedded in the incorrect solution in relation to a range of items
Table A10: Category descriptors for “diagnostic reasoning”
Table A11: Criterion 4 – Diagnostic reasoning of learner when selecting the incorrect solution in relation to one item
Table A12: Criterion 4 – Diagnostic reasoning of learner when selecting the incorrect solution in relation to a range of items
Table A13: Category descriptors for “use of everyday”
Table A14: Use of the everyday exemplars
Table A15: Category descriptors for “multiple explanations”
Table A16: Multiple explanations exemplars

Acknowledgements
This report was prepared by:
Professor Yael Shalem (Wits School of Education)
Ingrid Sapire (Wits School of Education)
Tessa Welch (Saide)
Maryla Bialobrzeska (Saide)
Liora Hellmann (Saide)

We would like to thank the following people for their role in the preparation of the
report:
M Alejandra Sorto (Maths Education Consultant)
David Merand (Maths Education Consultant)
Gareth Roberts (Statistical computation and analysis)
Carola Steinberg (Editing)

The project team for DIPIP Phases 1 and 2


Project Director: Prof Yael Shalem
Project Leader: Prof Karin Brodie
Project coordinators: Lynne Manson, Nico Molefe, Ingrid Sapire
The project team and evaluators would like to thank the Gauteng Department of
Education and in particular Reena Rampersad and Prem Govender for their support of
the project.

We would also like to thank all of the teachers, departmental subject facilitators, Wits
academic staff and students and the project administrators (Karen Clohessy, Shalati
Mabunda and team) who participated over the three years of the project.

Introduction
Two new imperatives have gained prominence in South Africa: the growing emphasis on
learners’ performance as an indicator of accountability, and the use of assessment to
inform learners’ learning and to monitor teachers’ work. Current policy is very clear
about the accountability imperative. The Department of Education has stipulated that
teachers and district officials monitor learners’ performance continuously and write
reports on their progress. It has also conducted and participated in a range of regional
and international evaluations, amongst them: the Trends in International Mathematics
and Science Study (TIMSS: 1995, 1999, 2003); the Systemic Evaluation for Grades 3 and 6 in
Numeracy and Literacy; the Southern and Eastern Africa Consortium for
Monitoring Educational Quality (SACMEQ: 2003, 2007); the Progress in International
Reading Literacy Study (PIRLS); and, most recently, the Annual National
Assessments (ANA). As part of the regional evaluation exercise, the Gauteng
Department of Education used the ICAS test (International Competitions and
Assessments for Schools). The ICAS test is an international, standardized, primarily
multiple-choice test, which was administered in Gauteng to a population of about 55 000
learners in public and private schools in Grades 3 to 11 in 2006, 2007 and 2008. The tests
and the data on learners’ errors in the 2006 and 2007 tests for Grades 3 to 9 provided the
starting point for teacher engagement with learners’ errors, the main subject of this report.

The data from standardised evaluations has been used by mathematical experts,
economists and statisticians at a systemic level and solely for benchmarking.1 Results
have been used to monitor the process of educational reform.2 In this process the
common messages heard by the public in general and by teachers in particular are that
“high participation” has come at the expense of quality; that the system
demonstrates low value and poor accountability; and that South Africa performs poorly
in comparison to poorer countries in Africa. Teachers were blamed and
shamed for their low subject matter content knowledge and poor professionalism. The
general view that has emerged is that good and committed teachers are pivotal for the
desired change. As important as they are, these kinds of messages have done very little
to change the results, and have certainly contributed to teachers’ low morale and to a loss of
public confidence in the capacity of education to deliver meaningful change (Taylor,
2008; Shalem & Hoadley, 2009; Van den Berg, 2011). Important for our task here is that
although the Department of Education (and the public) expect teachers to use the results
to improve their practice accordingly, to date teachers have not been invited to interpret
the results nor been shown how to integrate them into their practice. Teachers have not
been given the opportunity to develop skills to interpret results from systemic
evaluations. This is despite the current rhetoric of alignment between standardised
assessment and classroom teaching.

1 In South Africa, matric exams, the Grade 3 and 6 systemic evaluations and international and regional tests have been used to benchmark the performance of the system, or as a tool which, by controlling for social and economic factors that affect learners’ performance, reports to the public on learners’ standards of proficiency (van den Berg & Louw, 2006; van den Berg & Shepherd, 2008; Taylor, 2007, 2008).

2 In South Africa since 1994 there have been three curriculum changes, to which teachers and learners have had to adjust.

With the advent of the Annual National
Assessments (ANA), there have been new kinds of pronouncements. For example, the
Department of Basic Education is reported to have stated the following:
ANA is intended to provide regular, well-timed, valid and credible data on pupil
achievement in the education system. Assessment of pupils’ performance in the GET
Band (Grades 1- 9) has previously been done at school level. Unlike examinations that
are designed to inform decisions on learner promotion and progression, ANA data is
meant to be used for both diagnostic purposes at individual learner level and decision-making
purposes at systemic level. At the individual learner level, the ANA results will provide
teachers with empirical evidence on what the learner can and/or cannot do at a particular
stage or grade and do so at the beginning of the school year. Schools will inform parents
of their child’s ANA performance in March 2011 (our emphasis).3

The above statement includes two important claims, which together suggest an
approach that attempts to go beyond the current use of standardized assessment for
benchmarking purposes. The first claim is that assessment should now be dealt with on
two levels and in two forms: at a school level, through the use of specific and localised
forms of assessment and at a national level, through the use of general and standardised
forms of assessment. This is not a new claim and is in line with the Department of
Education policy of formative assessment. It suggests that worthwhile assessment data
should consist of evidence that is valid for specific learning contexts (or schools)
together with reliable and generalizable evidence that represents different learning
contexts (or schools). In assessment argot, worthwhile assessment data consist of
external assessment and classroom assessment (Gipps, 1989; Brookhart et al, 2009;
Looney, 2011)4. The second claim, which is particularly relevant for this report, is new
and suggests a new direction in the Department of Education policy. This claim
suggests that data from external assessment is intended to be used diagnostically both at
the level of the individual learner and at the systemic level of a grade cohort. The idea
of informing local knowledge using a systemic set of evidence, diagnostically, is not
sufficiently theorized. Teachers have always needed to recognise learners’ errors, a skill
without which they would not have been able to assess learners’ work. The difference
now is that teachers are required “to interpret their own learners’ performance in
national (and other) assessments” (DBE and DHET, 2011, p2) and develop better lessons
on the basis of these interpretations. According to the above statements, the tests will
provide teachers with information that will tell them, in broad strokes, how close or far
their learners are from national and/or international standards. In other words, teachers
will be given a set of “evaluation data” in the form of pass rates and proficiency scores.
This type of information is commonly known as “criterion-referenced proficiency
classification” (Looney, 2011, p27). What is not clear and has not been researched is
what is involved on the part of teachers in interpreting this data: what do teachers do
and what should teachers be doing so that the interpretation process is productive? How
does teachers’ tacit knowledge about learners’ errors (which they have acquired from
years of marking homework and tests, encountering learners’ errors in teaching, or
through hearing about errors from their colleagues) inform or misinform their reasoning
about evaluation data? In what ways should teachers work with proficiency information
when they plan lessons, when they teach or when they design assessment tasks?

3 Media statement issued by the Department of Basic Education on the Annual National Assessments (ANA): 04 February 2011. http://www.education.gov.za/Newsroom/MediaReleases/tabid/347/ctl/Details/mid/1389/ItemID/3148/Default.aspx

4 Gipps argues that externally administered standardised assessments (systematic assessment) are considered more reliable in terms of their process (design and marking). Classroom assessment, if well designed and on-going, provides a variety of contextually-relevant data.

It is important to remember that curriculum statements about assessment standards,
recurring standardised assessments, or reports on pass rates and proficiency levels do
not, in themselves, make standards clear (let alone bring about a change in practice).
Reported data of standardized assessment provides information about what learners can
or can’t do but does not analyse what the learners do not understand, how that may
affect their poor performance, and how preferred instructional practice could afford or
constrain addressing these difficulties (Shepard, 2009, p37). These are ideas that teachers
need to interpret from the evaluation data. Hence Katz et al are correct when they say:
Data don’t “tell” us anything; they are benign... The meaning that comes from
data comes from interpretation, and interpretation is a human endeavour that
involves a mix of insights from evidence and the tacit knowledge that the
group brings to the discussion. (2009, p.28) 5

These kinds of questions, stated above, inform our approach to teachers’ learning about
their practice through engagement with error analysis. They motivated our professional
development work in the Data-Informed Practice Improvement Project. The Data-
Informed Practice Improvement Project (henceforth, the DIPIP project) has been the first
attempt in South Africa to include teachers in a systematic way in a process of
interpretation of 55 000 learners’ performance on a standardized test. The three-year
research and development programme6 included 62 mathematics teachers from Grades 3
to 9 from a variety of Johannesburg schools. The project involved teachers in analysing
learners’ errors on multiple choice items of the ICAS mathematics test. In the project
teachers mapped the ICAS test’s items onto the curriculum, analysed learners’ errors,
designed lessons, taught and reflected on their instructional practices, and constructed test
items, all in a format of “professional learning communities”.7 With regard to its focus
on working with learners’ errors in evaluation data, the work done with teachers in the
DIPIP project is an exception. Its main challenge has been to design a set of meaningful
opportunities for teachers to reason about the evaluation data collated from the results of
the 55 000 learners who wrote the test in each of the years that it ran (2006-2008).

5 For Hargreaves (2010), focussing teachers’ learning from data is important for building collegiality. He argues that the future of collegiality may best be addressed by (inter alia) taking professional discussion and dialogue out of the privacy of the classroom and basing it on visible public evidence and data of teachers’ performance and practices, such as shared samples of student work or public presentations of student performance data (p.524).

6 There is currently a third phase of DIPIP, which is located in certain schools, following a very similar process with teacher groups in these schools.

7 See Shalem, Sapire, Welch, Bialobrzeska, & Hellman, 2011. See also Brodie & Shalem, 2010.
central conceptual questions we addressed in researching the results of this process are:
• What does the idea of teachers interpreting learner performance diagnostically mean in the context of a standardized assessment test?
• What do teachers, in fact, do when they interpret learners’ errors?
• In what ways can what they do be mapped onto the domains of teacher knowledge?

In the field of mathematics, Prediger (2010) uses the notion of “diagnostic competence”
to distinguish reasoning about learners’ errors from merely grading their answers:
The notion of diagnostic competence (that, in English, might have some medical
connotations) is used for conceptualizing a teacher’s competence to analyse and
understand student thinking and learning processes without immediately grading them.
(p76)

Prediger (2010) argues that teachers seem to be better interpreters of learners’ errors
when they have an interest in learners’ rationality, are aware of approaches to learning,
interrogate meanings of concepts (in contrast to just knowing their definitions), and have
studied domain-specific mathematical knowledge. Her view is consistent with Shepard’s
work on formative assessment, in particular with the idea of using insights from student
work, formatively, to adjust instruction (2009, p34). Research in mathematics education
has shown that a focus on errors, as evidence of reasonable and interesting mathematical
thinking on the part of learners, helps teachers to understand learner thinking, to adjust
the ways they engage with learners in the classroom situation, as well as to revise their
teaching approach (Borasi, 1994; Nesher, 1987; Smith, DiSessa & Roschelle, 1993). The
field of Maths education has developed various classifications of learners’ errors
(Radatz, 1979; Brodie & Berger, 2010) but there is hardly any work on what kinds of
criteria can be used to assess teacher knowledge of error analysis. There is no literature
that examines the ways teachers reason about evaluation data. There is hardly any
literature that shows what would be involved in assessing the quality of teachers’
reasoning about evaluation data gathered from a systematic assessment.8

In this report we hope to show a way of assessing teachers’ knowledge of error analysis
in relation to specific criteria and mathematical content. Our central claim is that
without teachers’ being able to develop this kind of knowledge the “empirical evidence
on what the learner can and/or cannot do at a particular stage or grade”,9 will remain
non-integrated with teacher practice.

8 Peng and Luo (2009) and Peng (2010) present one attempt to classify the tasks teachers engage with when they analyse learners’ errors (identifying, addressing, diagnosing, and correcting errors).

9 Media statement issued by the Department of Basic Education on the Annual National Assessments (ANA): 04 February 2011. http://www.education.gov.za/Newsroom/MediaReleases/tabid/347/ctl/Details/mid/1389/ItemID/3148/Default.aspx

Report Plan
Section one
This is the conceptual section of the report. The conceptual aim of this report is to locate
the idea of error analysis in teacher knowledge research, in order to develop criteria for
assessing what it is that teachers do when they interpret evaluation data. We develop
the idea of interpreting evaluation data diagnostically and show a way of mapping its
six constitutive aspects against the domains of teacher knowledge, as put forward by
Ball et al (2005, 2008) and Hill et al (2008a). By locating these aspects in the broader
discussion of teacher knowledge, we are able to demonstrate the specialisation involved
in the process of teachers’ reasoning about learners’ errors. We use the word ‘mapping’
consciously, because we believe that the task of error analysis requires both subject
matter knowledge and pedagogical content knowledge.

Section two
In this section we describe the error analysis activity, in the context of the DIPIP project.
We draw a distinction between two types of error analysis activities: “error-focused
activities” and “error-related activities”. This distinction structures the sequence of
activities in the DIPIP project. We describe the time-line of the project, as some of the
activities were repeated two and even three times throughout its three-year duration.
In this report we only examine the error analysis activity, which forms the central focus
of the “error-focused activities” and which was done twice. We refer to this time-line as
Round 1 and Round 2 and use it to compare teachers’ reasoning about learners’ errors
across these two rounds. The main difference between the two rounds is that in Round
1 the teachers worked in small groups led by a group leader, while in Round 2 the small
groups worked without a group leader. In comparing the two rounds, we are able to
suggest some ideas about the role of group leaders in this kind of professional
development project. In this section we also describe the ICAS test we used for the error
analysis activity and we provide information about the number of items analysed by the
groups in Rounds 1 and 2, as well as the training provided for the group leaders.

Section three
In this section we describe the methodology we followed for the evaluation of the
findings of the error analysis activity. We state the three central aims of the evaluation
of the error analysis activity. We then describe how we selected the items used as data
for the evaluation, the evaluation instrument, the coding template used to code the data,
the training of the coders and the coding process, including its validity and reliability.

We conclude with the description of the three-level approach we followed to analyse the
data.

Section four
This section presents what we call the level 1 analysis of findings. In this section we
detail the findings for each of the six criteria we selected for the analysis of teachers’
reasoning about the correct and the erroneous answers. This section of the report brings
us to the empirical aim of the report, which is to measure and analyze the quality of
teachers’ ability to do diagnostic error analysis, across the constitutive aspects of
interpreting evaluation data. The empirical analysis intends to answer the following
three empirical questions:
• On what criteria is the groups’ interpretation weaker and on what criteria is the groups’ interpretation stronger?
• In which mathematical content areas do teachers produce better judgement on the errors?
• Is there a difference in the above between two sets of grouped grades (Grade 3-6 and Grade 7-9)?

Section five
This section presents what we call the level 2 analysis of findings. In this section we
begin to draw together broader findings across criteria and groups. We present several
findings that help us construct an argument about the relationship between subject
matter knowledge and pedagogical content knowledge as it pertains to error analysis.
We also present findings that compare the two grouped grades (Grade 3-6 and Grade 7-
9) and describe the role of the group leaders. We conclude with some implications for
the idea of interpreting evaluation data diagnostically.

Section six
This section presents what we call the level 3 analysis of findings. In terms of the
professional development aspect of the report, the discussion in this section begins with
the ways in which teachers could be involved in analysing evaluation data. Then, based
on the findings of the error analysis activity and the mapping of error analysis against
the domains of teacher knowledge, we conclude with a short description of “diagnostic
judgment”. Diagnostic judgement is the construct we propose for describing what
teachers do when they interpret evaluation data. We argue that understanding teachers’
professionalism in this way may help the discourse to shift away from the dominance of
“accounting” to “accountability”.

We conclude with lessons to be learned from the project and recommendations both for
professional development and research.

Section One: Teacher knowledge of mathematical errors

No matter how the South African curriculum debate of the last 10 years is resolved
or how the change away from OBE to a content-based curriculum is enacted, and no matter what
educational philosophy informs our debate on good teaching, the research on teacher
knowledge insists that teachers today are expected to (and should be able to) make sound
judgments on sequence, pacing and evaluative criteria, so as to understand learners’
reasoning and to inform learners’ learning progression (Muller, 2006; Rusznyak, 2011;
Shalem & Slonimsky, 2011).

The central question that frames studies on teacher knowledge, in the field of
mathematics education, goes as follows: “is there a professional knowledge of
mathematics for teaching which is tailored to the work teachers do with curriculum
materials, instruction and students?” (Ball, Hill & Bass, 2005, p16). This question draws
on Shulman’s work, specifically, on his attempt to define “knowledge-in-use in
teaching”. Shulman’s main project was to situate subject matter knowledge within the
broader typology of professional knowledge.

Continuing within this tradition Ball, Thames and Phelps (2008) elaborate Shulman’s
notion of pedagogical content knowledge and its relation to subject matter knowledge
for mathematics teaching. Ball, Thames and Phelps define mathematical knowledge for
teaching as “the mathematical knowledge needed to carry out the work of teaching
mathematics” (p395). In order to investigate what this in fact means, they collected
extensive records of specific episodes, analyses of curriculum materials, and examples of
student work. They also drew on Ball’s personal experience of teaching and researching
in a Grade 3 mathematics class for a year. On the basis of these resources, they then
developed over 250 multiple-choice items “designed to measure teachers’ common and
specialized mathematical knowledge for teaching” (Ball, Hill & Bass 2005, p43). Some of
the items emphasise mathematical reasoning alone and others include the more
specialized knowledge for teaching (Hill, Ball & Schilling, 2008, p376; Ball, Hill & Bass
2005, pp 22 and 43). They then conducted large-scale surveys of thousands of practicing
teachers, as well as interviews with a smaller number of teachers. They wanted to know
“what ‘average’ teachers know about students’ mathematical thinking” (Hill, Ball &
Schilling, 2008, p376), whether this specialized knowledge is different from
mathematical reasoning and whether it can be shown to affect the quality of instruction
(Ball et al, 2008b).

Their work is summarised in several papers, showing that teachers’ mathematical
knowledge consists of four key domains (Ball, Hill & Bass, 2005; Ball, Thames & Phelps
2008; Hill, Ball & Schilling, 2008).10 The first two domains elaborate the specialisation of
subject-matter knowledge (“common content knowledge” and “specialized content
knowledge”). The second two domains elaborate the specialisation involved in teaching
mathematics from the perspective of students, curriculum and pedagogy. These domains
(“knowledge of content and students” and “knowledge of content and teaching”)
elaborate Shulman’s notion of “pedagogical content knowledge”.11

10 They also include two other domains, Knowledge of Curriculum and Knowledge at the mathematical horizon.

Although there is some debate on the specificity of the domains of knowledge in the
realm of teachers’ knowledge in mathematics, and other authors have expanded on them
in a number of academic papers (e.g. Adler, 2005), for the purpose of the DIPIP error
analysis evaluation we use Ball’s classification of the domains. In what follows we
present each of the domains, foregrounding the key aspects of ‘error analysis’ relevant to
each. We also formulate the criteria that are aligned with each domain and
explain why. The following table presents the way we mapped the evaluation criteria in
relation to the domains of teacher knowledge.

Table 1: Domains of teacher knowledge and related error analysis categories

Subject Matter Knowledge
- Knowledge domain: Common content knowledge (CCK). DIPIP categories: Procedural understanding of correct answers; Conceptual understanding of correct answers.
- Knowledge domain: Specialized content knowledge (SCK). DIPIP category: Awareness of errors.

Pedagogical Content Knowledge
- Knowledge domain: Knowledge of content and students (KCS). DIPIP categories: Diagnostic reasoning of learners’ thinking in relation to errors; Use of everyday links in explanations of errors; Multiple explanations of errors.
- Knowledge domain: Knowledge of content and teaching (KCT). DIPIP category: N/A (only in lesson design and teaching).

11 The principal point behind this typology of the four domains is that mathematics teaching is a specialized practice, which combines mathematical and pedagogical perspectives. The argument behind this work is that a specialized mathematical perspective includes things like “determining the validity of mathematical arguments or selecting a mathematically appropriate representation” (Ball, Thames and Bass, 2008, p398), but also “skills, habits of mind, and insight” (p399), or mathematical reasoning. The specialized pedagogical perspective, on the other hand, requires knowledge of curriculum, of learners and of teaching: it focuses on learners’ reasoning and requires specialised pedagogical tasks.
The first domain is common content knowledge, which is general subject matter knowledge
(CCK). Ball, Thames and Bass (2008) define this domain of teacher knowledge as
follows:
Teachers need to know the material they teach; they must recognize when their students
give wrong answers or when the textbook gives an inaccurate definition. When teachers
write on the board, they need to use terms and notation correctly. In short, they must be
able to do the work that they assign their students. But some of this requires
mathematical knowledge and skill that others have as well—thus, it is not special to the
work of teaching. By “common,” however, we do not mean to suggest that everyone has
this knowledge. Rather, we mean to indicate that this is knowledge of a kind used in a
wide variety of settings—in other words, not unique to teaching. (pp. 398-399)

The argument here is that like any other mathematician, a mathematics teacher with
good subject matter knowledge uses mathematical terms correctly, is able to follow a
procedure fully, and can evaluate whether a textbook defines a mathematical term correctly
or incorrectly. In terms of error analysis, this knowledge is about recognizing whether a
learner’s answer is correct or not. Recognition of errors is a necessary component of
teachers’ content knowledge. It is necessary, as it shows the “boundaries of the practice”
of doing mathematics, of “what is acceptable and what is not” (Brodie, 2011, p66). The
underlying condition, i.e. the pre-condition that enables teachers to recognize an error
mathematically, is for teachers to be able to explain the mathematical solutions of the
problem, both procedurally and conceptually. The emphasis in this domain is on
teachers’ ability to explain the correct answer. Recognizing “when their students give
wrong answers or when the textbook gives an inaccurate definition” relies on knowing
the explanation of a solution in full. Without a full knowledge of the explanation,
teachers may recognize the error only partially. “Only partial”, because they may not
know what the crucial steps that make up the solution are or what their sequence needs
to be (procedural explanations). “Only partial”, because they may not know what the
underlying conceptual links are that the student needs to acquire in order not to err.

Although Ball et al argue that there are some aspects of what teachers know which are not
unique to teaching, and although, one would argue, it is not expected that
mathematicians will distinguish procedural from conceptual explanations when they
address a mathematical solution, these two kinds of explanations are essential aspects of
teachers’ content knowledge and are the enablers of recognition of learners’ errors. Star
notes that “it is generally agreed that knowledge of concepts and knowledge of
procedures are positively correlated and that the two are learned in tandem rather than
independently” (2000, p80). The first aspect of mathematical knowledge that teachers
need in order to “carry out the work of teaching mathematics”, specifically of error
analysis, is content knowledge. This involves both knowledge of the crucial sequenced

steps needed to get to the correct answer (procedural knowledge) and their conceptual
links (conceptual knowledge) 12. Because this knowledge underlies recognition of error,
we call it “content knowledge” (irrespective of whether it is common or not to
mathematical experts in general). This means we included two criteria under common
content knowledge.

Criteria for recognizing Common Content Knowledge:


Procedural understanding
The emphasis of the criterion is on the quality of the teachers’ procedural explanations
when discussing the solution to a mathematical problem. Teaching mathematics
involves a great deal of procedural explanation which should be done fully and
accurately for the learners to grasp and become competent in working with the
procedures themselves.

Conceptual understanding
The emphasis of the criterion is on the quality of the teachers’ conceptual links made in
their explanations when discussing the solution to a mathematical problem. Teaching
mathematics involves conceptual explanations which should be made with as many
links as possible and in such a way that concepts can be generalised by learners and
applied correctly in a variety of contexts.

The second domain is Specialised Content Knowledge (SCK), which is mathematical
knowledge specific to teaching and which, according to Ball et al, general
mathematicians do not need. A different aspect of error analysis is located in this
domain, which involves activities such as “looking for patterns in student errors or …
sizing up whether a nonstandard approach would work in general” (Ball, Thames &
Bass, 2008, p400). Whereas teacher knowledge of the full explanation of the correct
answer enables a teacher to spot the error, teacher knowledge of mathematical
knowledge for teaching enables a teacher to interpret a learner’s solution and evaluate
its plausibility, by recognizing the missing steps and/or conceptual links and taking into
account, we would add, other factors such as the age of the learner, the time the
learner was given to complete the task, the complexity of the design of the question, etc.
Notwithstanding our addition, the main idea about “sizing up whether a nonstandard
approach would work in general”, assumes that teachers recognize errors relationally.
They evaluate the “nonstandard approach” and/or “the error” in relation to what is
generally considered the correct approach, taking into account the context of the
“nonstandard approach” and/or the error. In Ball et al’s words, knowledge of this
domain enables teachers to “size up the source of a mathematical error” (Ball, Thames &
Bass, 2008, p397) and identify what mathematical step/s would produce a particular
error.

12 Some mathematical problems lend themselves more to procedural explanations, while in others the procedural and the conceptual are more closely linked. There is a progression in mathematical concepts, so that what may be conceptual for a Grade 3 learner (for example, basic addition of single-digit numbers) is procedural for a Grade 9 learner, who will have progressed to operations at a higher level.
Error analysis is a common practice among mathematicians in the course of their own
work; the task in teaching differs only in that it focuses on the errors produced by
learners… Teachers confront all kinds of student solutions. They have to figure out what
students have done, whether the thinking is mathematically correct for the problem, and
whether the approach would work in general. (ibid)13

It is important to emphasise that although, as Ball et al define above, the knowledge of
errors in this domain focuses on what “students have done”, teachers’ reasoning about
the error is mathematical (i.e. not pedagogical) in the main. In their analysis it forms the
second domain of teachers’ content knowledge:
deciding whether a method or procedure would work in general requires mathematical
knowledge and skill, not knowledge of students or teaching. It is a form of mathematical
problem solving used in the work of teaching. Likewise, determining the validity of a
mathematical argument, or selecting a mathematically appropriate representation,
requires mathematical knowledge and skill important for teaching yet not entailing
knowledge of students or teaching. (p398)

Ball, Thames and Bass characterize this type of knowledge as “decompressed
mathematical knowledge” (2008b, p400), which a teacher uses when s/he unpacks a topic
for the learner or makes “features of particular content visible to and learned by
students” (ibid, see also Prediger, 2000, p79).
Teaching about place value, for example, requires understanding the place-value system
in a self-conscious way that goes beyond the kind of tacit understanding of place value
needed by most people. Teachers, however, must be able to talk explicitly about how
mathematical language is used (e.g., how the mathematical meaning of edge is different
from the everyday reference to the edge of a table); how to choose, make, and use
mathematical representations effectively (e.g., recognizing advantages and disadvantages
of using rectangles or circles to compare fractions); and how to explain and justify one’s
mathematical ideas (e.g., why you invert and multiply to divide fractions). All of these
are examples of ways in which teachers work with mathematics in its decompressed or
unpacked form. (ibid)
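The fraction-division example mentioned in the quotation above can be unpacked in just this sense. As a brief illustration (ours, not drawn from the report or from Ball et al), the following LaTeX sketch shows one way a teacher might justify the invert-and-multiply rule rather than simply state it: multiplying the numerator and the denominator of the quotient by the reciprocal of the divisor turns the denominator into 1.

% A minimal sketch of an "unpacked" justification for invert-and-multiply.
\[
\frac{a}{b} \div \frac{c}{d}
= \frac{\;\frac{a}{b}\;}{\;\frac{c}{d}\;}
= \frac{\frac{a}{b} \times \frac{d}{c}}{\frac{c}{d} \times \frac{d}{c}}
= \frac{\frac{a}{b} \times \frac{d}{c}}{1}
= \frac{a}{b} \times \frac{d}{c},
\qquad b, c, d \neq 0.
\]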

13 So, for example (Ball, 2011, Presentation), when marking their learners’ work, teachers need to judge and be able to explain to the learners which definition of a concept (‘rectangle’, in the following example) is more accurate:
• a rectangle is a figure with four straight sides, two long and two shorter
• a rectangle is a shape with exactly four connected straight line segments meeting at right angles
• a rectangle is flat, and has four straight line segments, four square corners, and it is closed all the way around.
In relation to Shulman’s distinction between subject matter knowledge and pedagogical
content knowledge, Ball, Thames and Bass (2008) argue that the above two domains are
framed, primarily, by subject matter knowledge. This is very important from the
perspective of examining teachers’ interpretation of evaluation data. As Peng and Luo
(2009) argue, if teachers identify learners’ errors but interpret them with wrong
mathematical knowledge, their evaluation of student performance and their plan for a
teaching intervention are both meaningless. In other words, the tasks that teachers
engage with, in error analysis, such as sizing up the error or interpreting the source of its
production, are possible because of the mathematical reasoning that these domains of
teacher knowledge equip them with. The idea behind this is that strong mathematics
teachers recruit content knowledge into an analysis of a teaching situation and do so by
recruiting their “mathematical reasoning” more than their knowledge of students,
teaching or curriculum. This mathematical reasoning enables strong mathematics
teachers to size up the error from the perspective of socializing others into the general
field of mathematics. We included one criterion under Specialized Content Knowledge.

Criteria for recognizing Specialized Content Knowledge


Awareness of error
This criterion focuses on teachers’ explanations of the actual mathematical error and not
on learners’ reasoning. The emphasis in the criterion is on the mathematical quality of
teachers’ explanations of the actual mathematical error.

The third domain of Knowledge of Content and Students orients teachers’ mathematical
perspective to the kind of knowing typical of learners of different ages and social
contexts in specific mathematical topics. Teachers develop this orientation from their
teaching experience and from specialized educational knowledge of typical
misconceptions that learners develop when they learn specific topics. Ball, Thames and
Bass (2008) state as examples of this domain of teacher knowledge, “the kinds of shapes
young students are likely to identify as triangles, the likelihood that they may write 405
for 45, and problems where confusion between area and perimeter lead to erroneous
answers” (p401). This knowledge also includes common misinterpretations of specific
topics, or levels of development in representing a mathematical construct (e.g. van
Hiele levels of development of geometric thinking). Teacher knowledge in this domain
contains “knowledge that observant teachers might glean from working with students,
but that have not been codified in the literature” (Hill, Ball & Schilling, 2008, p378). So,
for example, after years of teaching, teachers gain an understanding of how to define a
mathematical concept in a way that is both accurate and appropriate for Grade 2 learners; they
come to know which aspects of the definition of a mathematical concept are more difficult
for these learners, and how the learners’ everyday knowledge (of the particular age
group) can get in the way of acquiring the specialised mathematical knowledge (Ball,
2011).
From the point of view of error analysis, this knowledge domain involves knowing specific
mathematical content from the perspective of how learners typically learn the topic or
“the mistakes or misconceptions that commonly arise during the process” of learning
the topic (Hill, Ball & Schilling, 2008, p375). Ball, Thames and Bass (2008) emphasise
that this type of knowledge builds on the above two specialized domains, yet it is
distinct:
… Recognizing a wrong answer is common content knowledge (CCK), whereas sizing up
the nature of an error, especially an unfamiliar error, typically requires nimbleness in
thinking about numbers, attention to patterns, and flexible thinking about meaning in
ways that are distinctive of specialized content knowledge (SCK). In contrast, familiarity
with common errors and deciding which of several errors students are most likely to
make are examples of knowledge of content and students (KCS). (p401)

The knowledge of this domain enables teachers to explain and provide a rationale for
the way the learners were reasoning when they produced the error. Since it is focused
on learners’ reasoning, it includes, we argue, the ability to provide multiple explanations
of the error. Because contexts of learning (such as age and social background) affect
understanding and because in some topics the learning develops through initial
misconceptions, teachers will need to develop a repertoire of explanations, with a view
to addressing differences in the classroom.

How is this knowledge about error analysis different from the first two domains? The
first two domains, being subject-matter based, combine knowledge of the correct
solution (which includes both knowledge of the procedure to be taken and of the
underlying concept: “Common Content Knowledge”) with knowledge about errors
(“Specialized Content Knowledge”, which is focused on the relation between the error /
the non-standard solution and the correct answer / the answer that is generally
accepted). The social context of making the error is secondary to the analysis of the
mathematical content knowledge involved in explaining the solution or the error, and
is therefore backgrounded. “Knowledge of content and students”, the third domain, is
predominantly concerned with the context-specific experiential knowledge about errors that
teachers develop, drawing on their knowledge of students (Hill et al, 2008b, p385).14
The diagnostic aspect of this domain is focused on learner reasoning; the general
mathematical content knowledge of the correct answer is secondary and is therefore
backgrounded. We included three criteria under Knowledge of Content and Students.

14 Hill et al discuss the difficulties they found in measuring KCS. They believe that “logically, teachers must be able to examine and interpret the mathematics behind student errors prior to invoking knowledge of how students went astray” (2008b, p390). They found that teachers’ mathematical knowledge and reasoning (the first and second domains) compensate when their knowledge of content and students is weak. This has implications for their attempt to measure the distinctiveness of KCS, which is beyond the scope of this report.
Criteria for recognizing Knowledge of Content and Students
Diagnostic reasoning
The idea of error analysis goes beyond identifying the actual mathematical error
(“awareness of error”). The idea is to understand how teachers go beyond the
mathematical error and follow the way learners were reasoning when they produced the
error. The emphasis in this criterion is on the quality of the teachers’ attempt to provide
a rationale for how learners were reasoning mathematically when they chose a
distractor.

Use of everyday knowledge


Teachers sometimes explain why learners make mathematical errors by appealing to
everyday experiences that learners draw on and/or confuse with the mathematical
context of the question. The emphasis in this criterion is on the quality of the use of
everyday knowledge in the explanation of the error, judged by the links made to the
mathematical understanding that the teachers attempt to advance.

Multiple explanations of error


One of the challenges in the teaching of mathematics is that learners need to hear more
than one explanation of the error. This is because some explanations are more accurate
or more accessible than others and errors may need to be explained in different ways for
different learners. This criterion examines the teachers’ ability to offer alternative
explanations of the error when they are engaging with learners’ errors.

Knowledge of Content and Teaching (KCT) is the fourth domain of teacher knowledge.
This type of knowledge links subject matter knowledge (CCK+SCK)
with knowledge about instruction, which takes into account knowledge about students
(KCS). Based on their knowledge of these three domains, teachers use their knowledge
of teaching to decide on the sequence and pacing of lesson content or on things such as
which learner’s contribution to take up and which to ignore. This domain of knowledge
includes “knowledge of teaching moves”, such as “how best to build on student
mathematical thinking or how to remedy student errors” (Hill et al, 2008b, p378). As
this domain is concerned with teachers’ active teaching, it falls outside the scope of this
report.

It is worth noting that the four domains follow a sequence. The idea of a sequence
between the first two subject knowledge-based domains and the second two
pedagogical content knowledge-based domains is important for its implications for
teacher development of error analysis. It suggests that the first three domains, when

contextualized in approaches to teaching and instruction, equip teachers to design their
teaching environment. It is only when teachers have learnt to understand patterns of
error, to evaluate non-standard solutions, or to unpack a procedure, that they will be
able to anticipate errors in their teaching or in their assessment, and prepare for these in
advance. Research (Heritage et al, 2009) suggests that the move from error analysis to
lesson planning is very difficult for teachers. The sequence of the domains in this model
suggests the importance of knowledge about error analysis for developing mathematical
knowledge for teaching.

Section two: Activity and Process

2.1. The Activity
DIPIP engaged the teachers in six activities. These can be divided into two types: error-
focused activities and error-related activities. Error-focused activities engaged the teachers
directly in error analysis. Error-related activities engaged teachers in activities that built
on the error analysis but were focused on learning and teaching more broadly.

Table 2: “Error-focused activities” and “error-related activities”

Error-focused activities:
• Analysis of learner results on the ICAS mathematics tests (with a focus on the multiple choice questions and the possible reasons behind the errors that led to learner choices of the distractors)15
• Analysis of learners’ errors on tests that were designed by the teachers
• An interview with one learner to probe his/her mathematical reasoning in relation to errors made in the test.

Error-related activities:
• Mapping of ICAS test items in relation to the South African mathematics curriculum
• Development of lesson plans which engaged with learners’ errors in relation to two mathematical concepts (equal sign; visualisation and problem solving)
• Teaching the lesson/s and reflecting on one’s teaching in “small grade-level groups” and presenting it to “large groups”16

In the course of Phases 1 and 2 of the DIPIP project, the error analysis activity followed
after the curriculum mapping activity17 and structured a context for further professional
conversations among teachers about the ICAS and other test data. Teachers used the
tests to analyse the correct answers and also the errors embedded in the distractors

15
“Distractors” are the three or four incorrect answers in multiple choice test items. They are designed to be close enough
to the correct answer to ‘distract’ the person answering the question.
16
The small groups consisted of a group leader (a mathematics specialist – Wits School of Education staff member or post
graduate student who could contribute knowledge from outside the workplace), a Gauteng Department of Education
(GDE) mathematics subject facilitator/advisor and two or three mathematics teachers (from the same grade but from
different schools). This meant that the groups were structured to include different authorities and different kinds of
knowledge bases. These were called small grade-level groups (or groups). As professional learning communities, the
groups worked together for a long period of time (weekly meetings during term time at the Wits Education Campus for
up to three years), sharing ideas and learning from each other and exposing their practice to each other. In these close knit
communities, teachers worked collaboratively on curriculum mapping, error analysis, lesson and interview planning, test
setting and reflection. For certain tasks (such as presenting lesson plans, video clips of lessons taught or video clips of
learner interviews) the groups were asked to present to large groups. A large group consisted of the grade-level groups
coming together into larger combined groups, each consisting of four to six small groups (henceforth the large groups).
This further expanded the opportunities for learning across traditional boundaries. (See Shalem, Sapire, Welch,
Bialobrzeska, & Hellman, 2011, pp.5-6)
17
See the full Curriculum Mapping report, Shalem, Y., & Sapire, I. (2011)
presented in the multiple choice options and the incorrect solutions given by learners on
the open tests.

2.2 The ICAS test


To provide a basis for systematic analysis of learners’ errors, the results of Gauteng
learners on an international, standardized, multiple-choice test, the ICAS test, were
used. The International Competitions and Assessments for Schools (ICAS) test is
conducted by Educational Assessment Australia (EAA), University of New South Wales
(UNSW) Global Pty Limited. Students from over 20 countries in Asia, Africa, Europe,
the Pacific and the USA participate in ICAS each year. EAA produces ICAS papers that
test students in a range of subject areas including Mathematics. Certain schools in
Gauteng province, both public and private schools, used the ICAS tests in 2006, 2007 and
2008, and it was the results of these learners on the 2006 and 2007 tests that provided the
starting point for teacher engagement with learners’ errors. The ICAS test includes
multiple choice and some open items. The Grade 3-6 tests consist of 40 multiple choice
questions and the Grade 6-11 tests consist of 35 multiple choice items and 5 open
questions.

2.3 Time line


The first round of the error analysis activity ran for ten weeks, from July 2008 to October
2008. In this round, which we refer to as Round 1, the teachers analysed data from the
ICAS 2006 tests. As in the curriculum mapping activity, the groups worked on either
the even or odd numbered items (the same items for which they had completed the
curriculum mapping activity). The second round of error analysis ran for five weeks in
September and October 2010. In this round, which we refer to as Round 2, the teachers
in their groups analysed data from the ICAS 2007 tests as well as tests which they had
set themselves. The number of items analysed in Round 2 had to be cut down because of
time limits.

2.4 The Process


Round 1 error analysis gave the teachers the opportunity in their groups to
discuss the mathematical reasoning that is required to select the correct option in
the ICAS 2006 multiple choice test items, as well as to provide explanations for
learners’ choices of each of the distractors (incorrect answers). In order to
deepen the teachers’ conceptual understanding and appreciation of learners’
errors, the groups had to provide several explanations for the choices learners
made. This was intended to develop a more differentiated understanding of
reasons underlying learners’ errors amongst the teachers. The focus of the error
analysis in this Round was on the multiple choice items because the test

designers provided a statistical analysis of learner responses for these, which
served as the starting point for teachers’ analysis. Some groups also discussed
the open ended items. Groups had to analyse either odd or even numbered
items, as they had done in the curriculum mapping activity. Some groups
analysed more items than others, but all completed the analysis of the 20
odd/even numbered items they were assigned. Table 3 below18 lists the material
given to the groups, the error analysis task and the number and types of groups
(small or large or both) in which the teachers worked in Round 1 error analysis.

Table 3: Round 1 Error analysis

Material: Exemplar templates (see Appendix 1); guidelines for the completion of the error analysis template.
Tasks: Group leader training: discussion of error analysis of selected items to generate a completed template for the groups to work with.
Group type: Group leaders with project team.

Material: ICAS achievement results for Gauteng.
Tasks: Grading items by difficulty level. Our method for grading item difficulty was as follows: the achievement statistics for the learners who wrote the ICAS tests in the Gauteng sample were entered onto Excel sheets and items were graded from the least to the most difficult, based on this data. The items were thus all assigned a “difficulty level” from 1 to 40.19
Group type: Project team.

Material: Test items and answer key with learner performance data (per item) from the 2006 ICAS test; statistical analysis of learner performance for the correct answer and the 3 distractors; error analysis template (see Appendix 2).
Tasks: Error analysis of learner performance using the template: identification of the mathematical reasoning behind the correct response; provision of possible explanations for learners’ choice of each of the three distracters.
Group type: 14 small grade-specific groups.
18
For more detail see Shalem, Sapire, Welch, Bialobrzeska, & Hellman, 2011
19
Our initial plan was to work only on a selected number of test items (ICAS 2006). We were planning to choose the
items that proved to be most difficult for the learners who wrote the test at the time (55,000 learners in Gauteng from both
public and independent schools). At the time of the start of the project (last quarter of 2007), the EAA’s Rasch analysis of
each of the test items was not available.
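The difficulty grading described in Table 3 amounts to ordering items by learner achievement and numbering them from least to most difficult. A minimal sketch of that step is given below, assuming achievement is summarised as the proportion of learners answering each item correctly; the data, the column names and the direction of the 1 to 40 scale (level 1 for the least difficult item) are illustrative assumptions and are not taken from the project’s actual Excel sheets.

    import pandas as pd

    # Invented learner response data: one row per learner per item, with a 1/0
    # flag indicating whether the response was correct.
    responses = pd.DataFrame({
        "item":    [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "correct": [1, 0, 1, 0, 0, 1, 1, 1, 1],
    })

    # Facility = proportion of learners who answered each item correctly.
    facility = responses.groupby("item")["correct"].mean()

    # Grade items from least to most difficult: the highest facility gets
    # difficulty level 1 (the direction of the scale is an assumption).
    difficulty_level = facility.rank(ascending=False, method="first").astype(int)
    print(difficulty_level.sort_values())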

In Round 2 Error analysis eleven20 of the groups repeated the error analysis and
curriculum mapping activities on selected ICAS 2007 items or on their own tests21, but
without the facilitation of the group leader. This was intended to establish how
effectively the experiences of Round 1 error analysis and other project activities had
enabled the teachers, in their groups, to work independently. Table 4 below22 lists the
material given to the groups, the error analysis task and the number and types of groups
in which the teachers worked in Round 2 error analysis.

Table 4: Round 2 Curriculum Mapping and Error Analysis

Material: ICAS 2007 test items; error analysis template with curriculum alignment built into the template.
Tasks: Error analysis of ICAS 2007 tests. The groups were assigned 12 items for analysis. Not all the small groups completed the analysis of these items. The small groups conducted the same analysis as they did for the ICAS 2006 test items.
Group type: 6 groups (one of each of Grades 3, 4, 5, 6, 7 and 9).

Material: Error analysis template including curriculum mapping, which was modified in order to accommodate analysis of “own tests” (see Appendix 3)23.
Tasks: Error analysis of “own tests”. As with the ICAS, the groups were asked:
• to pool together the results of all the learners in their class and to work out the achievement statistics on the test;
• to place the results on a scale and on that basis judge the difficulty of each item;
• to analyse the ways in which the learners got the correct answer to each of the questions as well as the ways the learners got each of the items wrong.
Group type: 5 groups on own test items (one of each of Grades 3, 4, 5, 6 and 8).

Material: Verbal instructions to groups.
Tasks: Error analysis presentations: all groups were requested to present the error analysis of one item to the larger groups. Group leaders were invited to attend the presentation.
Group type: Three large groups (Grades 3 and 4, Grades 5 and 6, and Grades 7 and 9).
20
Round 2 error analysis was completed by 11 of the 14 teacher groups since three small groups did a third round of
teaching in the last phase of the project.
21
See Shalem, Sapire, Welch, Bialobrzeska, & Hellman, 2011 on the process and the design of “own tests”
22
For more detail see Shalem, Sapire, Welch, Bialobrzeska, & Hellman, 2011.
23
Due to time constraints, as well as to the fact that a further round of teaching was not going to follow this exercise, the
revised template did not include a question on whether and how teachers taught the concept underlying the test item.

2.5 Group leader training


A plenary meeting with group leaders for the error analysis mapping activity was held
prior to the commencement of the small group sessions. The session was led by the
project leader, Prof Karin Brodie, with input taken from all group leaders.

When the error analysis activity was presented to the groups, there was a briefing
session where all groups sat together, before they split into their smaller groups and
started on the activity. Project coordinators moved around and answered questions
while groups worked. The groups were encouraged to complete the templates in full,
and queries were settled where possible. When the first templates were returned, groups were
given feedback and asked to add/amend their templates as necessary – to try for even
better completion of templates. Some of the group discussions were recorded24. After
the activity was completed, we conducted two large group de-briefing discussions (one
with the Grade 3, 4, 5 and 6 groups and one with the Grades 7, 8 and 9 groups). These
debriefing sessions were recorded. This report is based on the written templates, not on
the recordings.

2.6 Item analysis


The error analysis component of the DIPIP project resulted in a large number of
completed templates in which groups had recorded their analysis of the items. This
involved analysis of both correct and incorrect solutions.

Table 5 summarises the total number of items analysed by the groups during Round 1
and 2. Groups are called “odd” and “even” according to the test item numbers assigned
to them for initial curriculum mapping in Round 1 (see Column Two). Column Three
details the number of ICAS 2006 items analysed by the groups in Round 1. Column
Four details the number of ICAS 2007 / Own tests analysed by the groups in Round 2.

24
Three groups were recorded in Round 1 and eleven groups were recorded in Round 2.
Table 5: Number of items analysed by groups (Rounds 1 and 2)

Grade    Numbers    Round 1 (ICAS 2006 test)    Round 2 (ICAS 2007 test/Own test)
3 Odd 17 Own test (6)
3 Even 16 ICAS (10)
4 Odd 20 Own test (5)
4 Even 20 ICAS (11)
5 Odd 20 Own test (6)
5 Even 20 ICAS (4)
6 Odd 20 Own test (4)
6 Even 20 ICAS (11)
7 Odd 20 Not done
7 Even 20 ICAS (8)
8 Odd 11 Own test (6)
8 Even 13 Not done
9 Odd 16 ICAS (11)
9 Even 17 Not done
Total 250 items 82 items

Round 1 error analysis produced a much bigger set of data (250 items) than Round 2 (82
items). In Round 1 certain groups analysed more items than others. The expectation for
error analysis of items in Round 1 was a minimum of 20 items per group. No group
completed more than 20 ICAS 2006 items, although some groups assisted other groups
with editing and improving their analysis when they had completed the 20 items
allocated to their group. Grade 6 – 9 ICAS tests include open ended questions. Some of
the groups chose not to analyse these items. This can be seen in the Grade 8 and 9
groups that mapped less than 20 items. Neither of the Grade 3 groups completed the
mapping of all 20 even/odd items. The expectation for error analysis of items in Round
2 was a minimum of 12 ICAS 2007 items and 6 “own test” items per group. No group
completed all the 12 ICAS 2007 items assigned to it. Three of the six groups completed
11 items. Three of the six groups that analysed their own test completed all the 6
questions.

In summary, in Round 2 only 11 of the 14 groups participated and they analysed fewer
items due to time constraints. Some groups in Round 2 analysed the ICAS 2007 test
while others analysed tests that they had set themselves. The membership of most of the
groups that continued across the two rounds of error analysis remained the same. The format of the
error analysis was the same for both rounds. Groups received feedback in both rounds.
The main difference between rounds was that group leaders were not present in Round
2. In between the two rounds all of the groups were involved in lesson design, teaching
and reflection where learners’ errors played a significant role in planning, teaching and
reflecting on teaching. It was intended that in coming back to error analysis in Round 2
the groups would be more familiar with the idea of error analysis both in teaching and
related activities. In view of the differences between the two rounds, comparisons
between rounds are possible, although they should be treated tentatively. The comparison
does not imply any inferences about the impact of Round 1 activities on those of Round 2.

Section three: Evaluation analysis methodology

3.1 Aim of the evaluation
We aim to evaluate the quality of the error analysis done by the groups. In particular, we
evaluate the quality of teachers’ reasoning about error, as evident in the group texts (both
in the “correct analysis texts” and in the “error analysis texts”), and the value of the group
leaders in leading the process.

The empirical analysis addresses the following four questions:
 On what criteria is the groups’ error analysis weaker and on what criteria is the
groups’ error analysis stronger?
 In which mathematical content areas do the groups produce better judgements on
the error?
 Is there a difference in the above between primary (Grade 3-6 groups) and high
school (Grade 7-9 groups) teachers?
 What does the change in performance between Round 1 and 2 suggest about the role
of group leaders?

3.2 Items evaluated


For the purpose of analysis, we selected a sample of ten ICAS 2006 items per grade per
group for Round 1. These ten items (per group) were made up of five items from the
first half of the test and five items from the second half of the test. Since fewer items
were analysed in Round 2, all the items completed by the groups that participated in
Round 2 were included. Only one distractor (or incorrect answer) per item was selected
for the evaluation analysis. The selected distractor for each item was the one most
frequently chosen according to the test data, in other words, the “most popular”
incorrect learner choice (a small sketch of this selection step is given after Table 6 below). Table 6 below summarises the sample of data used in the
evaluation analysis of Rounds 1 and 2 error analyses:

Table 6: Sample summary

Round 1
Full set of data considered: completed error analysis templates based on ICAS 2006 test data (Grades 3-9, 14 groups).
Sample: ICAS 2006 items: 10 items selected per group, hence 20 items selected per grade. The selected items were chosen to represent the curriculum. Groups: all groups Grades 3-9 (14); 140 items in the sample. All texts related to the correct answer and the chosen distractor of the selected items formed the data which was analysed – from Round 1 there were 572 texts coded (316 answer texts and 252 error texts).

Round 2
Full set of data considered: completed error analysis templates based on the ICAS 2007 test as well as data from tests which groups had developed (“own test” data) (Grades 3-9, 11 groups).
Sample: ICAS 2007: some groups Grades 3-9 (6) mapped items selected per group, with various item numbers coded per grade. “Own tests”: some groups Grades 3-9 (5) mapped items from tests that they had set as groups for the learner interview activity, with various item numbers coded per grade. 82 items in the sample. All texts related to the correct answer and the chosen distractor of the selected items formed the data which was analysed – from Round 2 there were 284 texts coded (173 answer texts and 111 error texts).

The sample size in Round 2 was smaller than the sample size in Round 1. In both rounds
the number of answer texts was higher than the number of error texts.
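The selection of the “most popular” distractor referred to above is simply the incorrect option chosen by the largest number of learners for an item. A minimal sketch of that selection step follows, using an invented set of response counts rather than the actual ICAS data:

    # Invented response counts for one multiple choice item (correct key "B").
    option_counts = {"A": 120, "B": 310, "C": 95, "D": 210}
    correct_key = "B"

    # The "most popular" distractor is the incorrect option with the highest count.
    distractor_counts = {opt: n for opt, n in option_counts.items() if opt != correct_key}
    most_popular_distractor = max(distractor_counts, key=distractor_counts.get)
    print(most_popular_distractor)  # D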

3.3 Evaluation Instrument


The initial evaluation criteria and their corresponding categories were drawn up into a
template by three members of the project team. The template was then used to code the
full data sample, piecemeal, while on-going discussions were held to refine the criteria
over a four month period. Agreement was reached after the coding of four Round 1
grades was completed and the final format of the template and wording of the criteria
and categories was constructed. See more on the template below.

Two external coders, one an expert in the field of maths teacher education and the other
an experienced mathematician with experience in teacher education, coded the groups’
texts.
The coders were given the groups’ texts to code. A group’s text is divided into two text
types:
 “Correct answer texts” or the explanation provided by the group for the correct
answer.
 “Error answer texts” or the explanations provided by the group for the distractor
selected for the evaluation.

Groups provided more than one explanation for the correct answer or for the selected
distractor for some items. This meant that some items had one correct answer text and
one error text for the distractor, while other items had several correct answer
texts and/or several error texts for the distractor. The coders were given all of the texts written
by the groups in explanation of the correct answer and of the selected distractor (error).
The texts were arranged for the coders for each item in a large Excel coding sheet so that
all coders worked according to the same identification of texts. Coders were asked to
give a code to each text. In this way a code was assigned to each of the texts for each of
the items selected for the evaluation.

3.3.1 The coding template (see Appendix 4):


A coding template was prepared for the coders so that all coding related to the same
texts, and all the texts were arranged in the same format. The coding template included
general information about the group text, six columns of criteria and their
corresponding categories, and a final comment column.

One template per grade was prepared, with all the texts inserted into the template next
to the relevant item. The coding template for the ICAS test items consists of all the items
and all the texts (correct answer and error answer texts) associated with the item. The
coding template for “own test” was the same as that for the ICAS tests. The only
difference is that for the “own tests” template the actual incorrect answer to be discussed
was pasted into the “Distractor” column since it could not be represented by a key.

Instrument overview
General information on group texts:
 Grade – indicates the grade of the item texts.
 Item – indicates the item number for which the groups produced the correct answer
and the error answer texts.
 Maths Content area - indicates the curriculum area (number, pattern/algebra, space,
measurement or data handling) of the item.
 Distractor – indicates which distractor had been selected for analysis.
 Text number – indicates the number of the text to be coded. Numbering started from
one for each correct answer text and then from one again for each error text.
 Text – refers to the actual text as recorded by the groups in their completed
templates. The texts were pasted into the coding template, verbatim. Texts that could
not be pasted into the coding template were sent to coders in a separate document
attachment and referred to in the templates by grade and number.

Criterion code columns:
 In addition to the above information, the key aspect of the template is the criteria
and their relevant categories. The templates consist of six columns, which are related
to the six evaluation criteria selected. Each criterion is further divided into four
categories (“not present”, “inaccurate”, “partial” and “full”) that capture the quality
of the correct and the error answer text, in terms of teachers’ reasoning about the
correct answer and the distractor (or error in “own texts”). For purpose of simplicity
each category was coded with a number (4 assigned to “full” and 1 to “not present”).
Coders had to insert their coding decisions for each of the four categories into each
column, for each identified text.
Procedural explanations – a code of 1 to 4 assigned to each text, corresponding to the
four categories. This criterion measures the fullness and the correctness of the
groups’ description of the procedure to be followed in order to answer the question.
Conceptual explanations – a code of 1 to 4 assigned to each text, corresponding to the
four categories. This criterion measures the fullness and the correctness of the
groups’ description of the conceptual links that should be made in order to answer
the question.
Awareness of error – a code of 1 to 4 assigned to each text, corresponding to the four
categories. This criterion measures general knowledge of the mathematical error.
Diagnostic reasoning – a code of 1 to 4 assigned to each text, corresponding to the
four categories. This criterion measures the groups’ explanation of learners’ reasoning
behind the error.
Use of everyday – a code of 1 to 4 assigned to each text, corresponding to the four
categories. This criterion measures the groups’ use of everyday knowledge in explaining
learners’ reasoning.
Multiple explanations – a code of “n” or “f” assigned25 to each explanation text, from
which a code of 1 to 4 could be assigned to “multiple explanations” for each text. This
criterion measures the feasibility of the alternative explanations the groups offered to
explain learners’ reasoning behind the error.
 Comment – The final comment allowed coders to write any particular comments
they wished to note about the answer or error text in that row. These comments
could be used in the alignment discussion and evaluation.

The coders inserted their codes according to the given column heading for criterion
coding. A wide range of exemplars demonstrating the operationalization of the criteria is
provided in Appendices 6 – 11.

25
“n” was assigned to a particular text when it was considered not mathematically feasible in relation to the error under
discussion and “f” was assigned when the explanation was considered mathematically feasible. The final code for
“multiple explanations” was decided based on the number of feasible explanations given for the chosen distractor.
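To make the coding scheme concrete, the sketch below shows one way a single coded text could be recorded, with each criterion held as a category code from 1 (“not present”) to 4 (“full”) and the “multiple explanations” code derived from the number of explanations flagged as feasible (“f”). The record structure and the mapping from the count of feasible explanations to a code of 1 to 4 are illustrative assumptions, not the project’s actual template or rule.

    from dataclasses import dataclass, field
    from typing import List

    # Category codes for each criterion: 1 = not present, 2 = inaccurate,
    # 3 = partial, 4 = full (as described in Section 3.3.1).
    @dataclass
    class CodedText:
        grade: int
        item: int
        content_area: str                 # number, pattern/algebra, space, measurement, data
        distractor: str
        text_number: int
        text_type: str                    # "correct answer text" or "error answer text"
        procedural: int = 1
        conceptual: int = 1
        awareness_of_error: int = 1
        diagnostic_reasoning: int = 1
        use_of_everyday: int = 1
        explanation_flags: List[str] = field(default_factory=list)  # "f" or "n" per explanation
        comment: str = ""

        def multiple_explanations_code(self) -> int:
            """Derive a 1-4 code from the number of feasible ("f") explanations.
            The exact mapping used here is an assumption for illustration only."""
            feasible = self.explanation_flags.count("f")
            return min(4, 1 + feasible)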

The criteria
Whilst Section One offers explanations for the choice and the meaning of the criteria, we
offer a brief explanation in this section too. A wide range of exemplars are provided in
the appendices in order to demonstrate the operationalization of the criteria. Exemplar
items were chosen from Round 1 and Round 2 and in such a way that a spread of grades
is represented with both strong and weak explanations. Items were also selected so that
there are exemplars across all five of the mathematical content areas represented in the
curriculum. In what follows we provide a brief description of each criterion and the
Appendix number which includes the full wording of each criterion and its
corresponding four categories. For the full set of the six criteria and their categories see
Appendix 5.

Answer Text Criteria


The first two criteria were used to code the groups’ explanations of the correct answers.
Every “correct answer text” was thus coded according to two criteria: procedural and
conceptual.

Procedural explanations
The literature emphasises that the quality of teachers’ explanations depends on the
balance they achieve between explaining the procedure required for addressing a
mathematical question and the mathematical concepts underlying the procedure. This
criterion aims to grade the quality of the teachers’ procedural explanations of the correct
answer. The emphasis in the criterion is on the quality of the teachers’ procedural
explanations when discussing the solution to a mathematical problem through engaging
with learner test data. Teaching mathematics involves a great deal of procedural
explanation which should be done fully and accurately for the learners to grasp and
become competent in working with the procedures themselves (see Appendix 6).

Conceptual explanations
The emphasis in this criterion is on the conceptual links made by the teachers in their
explanations of the learners’ mathematical reasoning in relation to the correct answer.
Mathematical procedures need to be unpacked and linked to the concepts to which they
relate in order for learners to understand the mathematics embedded in the procedure.
The emphasis of the criterion is on the quality of the teachers’ conceptual links made in

their explanations when discussing the solution to a mathematical problem through
engaging with learner test data (see Appendix 7).

Error Text Criteria


The next four criteria were used to code the groups’ explanations of the distractor. These
explanations are about teachers’ engagement with the errors. Four criteria were selected:
“awareness of error”, “diagnostic reasoning”, “use of everyday knowledge” and
“multiple explanations”. Every “error answer text” was coded according to these four
criteria.

Awareness of error
The emphasis in this criterion is on the teachers’ explanations of the actual mathematical
error (and not on the learners’ reasoning). The emphasis in the criterion is on the
mathematical quality of the teachers’ explanations of the actual mathematical error
when discussing the solution to a mathematical problem (see Appendix 8).

Diagnostic Reasoning
The idea of error analysis goes beyond identifying the actual mathematical error. The
idea is to understand how teachers go beyond the mathematical error and follow the
way learners were reasoning when they made the error. The emphasis in the criterion is
on the quality of the teachers’ attempt to provide a rationale for how learners were
reasoning mathematically when they chose a distractor (see Appendix 9).

Use of the everyday


Teachers often explain why learners make mathematical errors by appealing to
everyday experiences that learners draw on and may confuse with the mathematical
context of the question. The emphasis in this criterion is on the quality of the use of
everyday knowledge, judged by the links made to the mathematical understanding that
the teachers attempt to advance (see Appendix 10).

Multiple explanations
One of the challenges in the teaching of mathematics is that learners need to hear more
than one explanation of the error. This is because some explanations are more accurate
or more accessible than others and errors need to be explained in different ways for
different learners. This criterion examines the teachers’ ability to offer alternative
explanations of the error when they are engaging with learners’ errors through analysis
of learner test data (see Appendix 11).

3.4 Training of coders and coding process
Coders started by coding one full set of Grade 3 texts, after an initial discussion of the
coding criteria and template. The evaluation analysis team discussed the assigned Grade
3 codes with the coders. After this and more general discussions about the format and
the criteria, the final format of the coding template was agreed on. This format was to
include all texts pasted into the template, so that there could be no confusion as to which
part of the teachers’ explanations the coders were referring when they allocated codes.

Discussion meetings with the evaluation analysis team were held after each set of coding
(per grade) was completed. The discussions allowed for refinement of the wording of
the criteria – and when refinements were made, coders reviewed their completed grades
in order to be sure that all coding was in line with the criteria.

Coding continued per grade, with discussion after the completion of each grade for
Grades 4, 5 and 6. After each full set of codes for a grade was completed by both coders,
the percentages of alignment between coders were calculated and sent to the coders. The
alignment differences were used to guide the above discussions and the evaluation
coordinator gave comments as to which areas the coders should focus on and think
more carefully about in order to improve on alignment. After the fourth set of codes was
completed and discussed it was decided that there was sufficient agreement between the
two coders for them to continue with the remaining sets of codes, and the process was set
in motion. All remaining templates were prepared and emailed to the coders.
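The percentage of alignment referred to above is simple percentage agreement: the share of texts to which both coders assigned the same category code. A minimal sketch of that calculation follows; the example code lists are invented for illustration.

    def percentage_agreement(codes_a, codes_b):
        """Share of texts given the same code by both coders, as a percentage."""
        if len(codes_a) != len(codes_b):
            raise ValueError("Both coders must have coded the same set of texts.")
        matches = sum(1 for a, b in zip(codes_a, codes_b) if a == b)
        return 100.0 * matches / len(codes_a)

    # Invented example: codes (1-4) assigned by two coders to ten texts.
    coder_1 = [4, 3, 3, 2, 4, 1, 3, 3, 2, 4]
    coder_2 = [4, 3, 2, 2, 4, 1, 3, 4, 2, 4]
    print(f"Agreement: {percentage_agreement(coder_1, coder_2):.0f}%")  # 80%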

3.5 Validity and reliability check


The full set of codes was completed and then reviewed by the evaluation analysis team.
Consensus discussions between the coders were held on certain items in order to hone
agreement between them. The final set of codes used in the analysis was agreed on in
discussion with and through arbitration by a third expert (a member of the evaluation
analysis team). The alignment between coders was 57% before the review, 71% after the
review and 100% after the arbitration.

3.6 Data analysis


The coded data were analysed quantitatively, finding observable trends and
relationships evident in the sample. The content of the templates was entered into Excel
spreadsheets to facilitate the overall analysis of the findings and comparison of the
various criteria. Data were summarised for descriptive analysis, and t-tests were done to
establish whether there were any significant changes between the error analysis in
Rounds 1 and 2. An unpaired t-test assuming unequal variances (since the sample
sizes were different) was used to test for significance. This was considered the most
conservative measure and appropriate for our data set. Correlations between
codes for the answer texts (procedural and conceptual explanations) and error texts
(awareness of error and diagnostic reasoning) were calculated using Pearson’s r
coefficient (a brief sketch of these calculations follows the three analysis levels below). The analysis follows three levels:

Level one: For each of the six criteria, we grouped the data into two sets of grouped-
grades (Grade 3-6 and Grade 7-9). This enabled us to discern patterns in performance
across the 14 small groups, between primary and secondary teachers (see Section four).
For each of the six criteria we provide the following:
 Graphs and tables comparing the findings for two sets of grouped grades (Grade 3-6
and Grade 7-9 groups) between Rounds 1 and 2.
 Graphical representations of the overall findings across all groups.
 Graphs representing the findings for mathematical content areas.

Level two: We generate broad findings across two or more criteria and about the total 14
groups analysed (see Section five). These findings enabled us to see patterns in teacher
knowledge of error analysis, divided along subject matter and pedagogical content
knowledge. This type of analysis also enabled us to compare the overall performance
across the criteria between the two sets of grouped grades (Grade 3-6 and Grade 7-9).
Lastly the analysis enables us to show the importance of guided instruction by group
leaders, for a successful process of accountability conversations, albeit with some
differentiation across the criteria. With these findings we are then able to construct a
broad description of what is involved in teachers’ “diagnostic judgment”.

Level three: We draw on the findings to generate a broad description (working towards a
conceptual definition) of what we mean by “diagnostic judgment” in teachers’ reasoning
about error (see Section five).
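As a brief illustration of the significance and correlation calculations described above, the sketch below applies an unpaired t-test assuming unequal variances (Welch’s test) and Pearson’s r using SciPy routines; the code lists are invented stand-ins for category codes, not project data.

    from scipy import stats

    # Invented stand-ins for category codes (1-4) assigned to texts in each round.
    round_1_codes = [4, 4, 3, 4, 3, 2, 4, 3, 4, 3, 4, 2]
    round_2_codes = [3, 2, 3, 1, 3, 2, 3, 1, 2, 3]

    # Unpaired t-test assuming unequal variances (Welch's t-test), used to test
    # whether the change between Rounds 1 and 2 is significant.
    t_stat, p_value = stats.ttest_ind(round_1_codes, round_2_codes, equal_var=False)
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")

    # Pearson's r between two sets of codes for the same texts, e.g. the
    # procedural and conceptual codes assigned to the answer texts.
    procedural = [4, 3, 3, 2, 4, 1, 3, 3, 2, 4]
    conceptual = [4, 3, 2, 2, 3, 1, 3, 4, 2, 4]
    r, p_r = stats.pearsonr(procedural, conceptual)
    print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")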

Section Four: Analysis of data

4.1 An overview of the comparison of Round 1 and Round 2
An overview of the comparison of Round 1 and Round 2 shows immediately that the
groups worked to a higher level in Round 1 when they had leaders. This overall
impression of quality differences between Rounds 1 and 2 shows a drop in the
percentage of texts demonstrating “full” explanations on every criterion between Round
1 and 2 and an increase in the percentage of texts demonstrating “not present”
explanations on every criterion between Round 1 and 2.

Figure 1: Round 1 – overall codes assigned on all six criteria

[Bar chart: “Round 1 Number Spread of Codes”. For each of the six criteria (Procedural, Conceptual, Awareness, Diagnostic, Everyday, Multiple), the chart shows the spread of the four codes: not present, inaccurate, partial and full.]

Figure 2: Round 2 – overall codes assigned on all six criteria

[Bar chart: “Round 2 Overall Spread of Codes”. For each of the six criteria (Procedural, Conceptual, Awareness, Diagnostic, Everyday, Multiple), the chart shows the spread of the four codes (not present, inaccurate, partial, full), with the criteria grouped on the horizontal axis into Subject Matter Knowledge and Pedagogical Content Knowledge.]

The correlation between procedural explanations and conceptual explanations was high
in both rounds, although it decreased in Round 2 (r = 0,74 in Round 1 and r = 0,66 in
Round 2). The correlation between awareness of the mathematical error and diagnostic
reasoning was also high and increased in Round 2 (r = 0,651 in Round 1 and r = 0,71 in
Round 2)26.

The downwards trend and the increases and decreases in texts demonstrating different
levels of the criteria will now be analysed individually for each of the criteria.

4.2. Analysis of individual criteria


We now discuss the criteria separately, focusing on grouped grades (3-6 and 7-9) and
mathematical content for each of the six criteria. We first note our observation separately
for each of the grouped grades, and then for all of the 14 groups together.

4.2.1 Procedural explanations

Figure 3: Procedural explanations of answers – Round 1 and 2 by grouped grades

[Bar chart: “Procedural explanations Round 1 and Round 2 by grade”. Percentages of Grade 3-6 and Grade 7-9 texts in Rounds 1 and 2 in each category: procedural explanation not given; inaccurate or incomplete; correct but missing some steps; accurate and full. The percentages are given in Tables 7 and 8 below.]

26
Since these correlations were high, exemplars of explanations which received the same level coding are given in
Appendices 6, 7, 8 and 9, in addition to other exemplar texts.
Analysis of procedural explanations in Grade 3-6 small groups’ texts:

Table 7: Grades 3-6 procedural explanations demonstrated in teacher text explanations

Procedural explanation not given:
  Round 1: 3%    Round 2: 17%    Change: 14%***
Procedural explanation inaccurate or incomplete:
  Round 1: 8%    Round 2: 2%     Change: -6%
Weaker explanations (first two categories combined): Round 1: 11%    Round 2: 19%
Procedural explanation correct but missing some steps:
  Round 1: 46%   Round 2: 61%    Change: 15%*
Procedural explanation accurate and full:
  Round 1: 43%   Round 2: 20%    Change: -23%***
Stronger explanations (last two categories combined): Round 1: 89%    Round 2: 81%

*** Difference significant at a 99% level of confidence
* Difference significant at a 90% level of confidence

Observations about groups’ procedural explanations in relation to correct answers in


Rounds 1 and 2 for the grade 3-6 groups:
1. The number of texts of the Grade 3-6 group that demonstrate weak procedural
explanations increased slightly between Rounds 1 and 2 (from 11% in Round 1 to
19% in Round 2).
2. The number of texts where no attempt was made to give procedural explanations
increased by 14% from 3% in Round 1 to 17% of texts in Round 2.
3. The number of texts where procedural explanations were inaccurate or incomplete
decreased by 6% (from 8% in Round 1 to 2% in Round 2).
4. The number of texts of the Grade 3-6 group that demonstrate strong procedural
explanations slightly decreased by 8% in Round 2 (from 89% in Round 1 to 81% in
Round 2).
5. The gap between incomplete and systematic explanations was not very high in
Round 1 but it grew much wider in Round 2 (from 3% in Round 1 to 41% in Round
2).

Analysis of procedural explanations in Grade 7-9 small groups’ texts:

Table 8: Grades 7-9 procedural explanations demonstrated in teacher text explanations

Procedural explanation not given:
  Round 1: 3%    Round 2: 9%     Change: 6%
Procedural explanation inaccurate or incomplete:
  Round 1: 8%    Round 2: 10%    Change: 2%
Weaker explanations (first two categories combined): Round 1: 11%    Round 2: 19%
Procedural explanation correct but missing some steps:
  Round 1: 43%   Round 2: 56%    Change: 13%**
Procedural explanation accurate and full:
  Round 1: 46%   Round 2: 26%    Change: -20%***
Stronger explanations (last two categories combined): Round 1: 89%    Round 2: 81%27

*** Difference significant at a 99% level of confidence
** Difference significant at a 95% level of confidence

Observations about groups’ procedural explanations in relation to correct answers in


Rounds 1 and 2 for the grade 7-9 groups:
1. The number of texts of the grade 7-9 group that demonstrate weak procedural
explanations slightly increased (from 11% in Round 1 to 19% in Round 2).
2. The number of texts where no attempt was made to give procedural explanations
increased by 6% (from 3% in Round 1 to 9% in Round 2).
3. The number of texts where procedural explanations were inaccurate or incomplete
increased by 2% (from 8% in Round 1 to 10% in Round 2).
4. The number of texts of the Grade 7-9 group that demonstrate strong procedural
explanations decreased by 7% in Round 2 (from 89% in Round 1 to 82% in Round 2).
5. The gap between incomplete and systematic explanations was not very high in
Round 1 (3%) but grew wider in Round 2 (31%).

27
Rounding of Round 1 and 2 percentages is done individually but Round totals are added and then rounded hence there
may be small discrepancies such as the one highlighted in this table. This occurs in certain tables in this report but is only
noted here, the first time such a discrepancy arises.
Overall findings on the procedural explanations of the correct answer – Rounds 1 and
2
Figure 4: Procedural explanations of answers – Round 1 and 2

[Bar chart: “Procedural explanations - Overall - Round 1 and 2”. Percentages of all texts in Rounds 1 and 2 in each category: procedural explanation not given; inaccurate or incomplete; correct but missing some steps; accurate and full. The percentages are given in Table 9 below.]

Table 9: Procedural explanations demonstrated in teacher text explanations

Procedural explanation not given:
  Round 1: 3%    Round 2: 16%    Change: 13%***
Procedural explanation inaccurate or incomplete:
  Round 1: 8%    Round 2: 5%     Change: -3%
Weaker explanations (first two categories combined): Round 1: 11%    Round 2: 20%
Procedural explanation correct but missing some steps:
  Round 1: 44%   Round 2: 57%    Change: 13%***
Procedural explanation accurate and full:
  Round 1: 46%   Round 2: 23%    Change: -23%***
Stronger explanations (last two categories combined): Round 1: 89%    Round 2: 80%

*** Difference significant at a 99% level of confidence

Observations about groups’ procedural explanations in relation to correct answers in
Rounds 1 and 2:
1. The overall number of texts that demonstrate weak procedural explanations
increased slightly between Rounds 1 and 2 (from 11% in Round 1 to 20% in Round
2).
2. The number of texts where no attempt was made to give procedural explanations
increased by 13% from 3% in Round 1 to 16% of texts in Round 2. This increase was
significant at 99%.
3. The number of texts where procedural explanations were inaccurate or incomplete
decreased by 3% (from 8% in Round 1 to 5% in Round 2). This decrease was not
significant.
4. The overall number of texts that demonstrate strong procedural explanations slightly
decreased by 9% in Round 2 (from 89% in Round 1 to 80% in Round 2).
5. The gap between incomplete and systematic explanations was not very high in
Round 1 but it grew much wider in Round 2 (from 2% in Round 1 to 34% in Round
2).
6. The decrease between Round 1 and Round 2 in the number of texts with full or
partially complete explanations was significant at 99%.

Procedural explanations, by mathematical content


Number was the content area in which the groups’ procedural explanations of the
correct answer were strongest. Data was the content area in which the groups’
procedural explanations of the correct answer were the weakest. The graphs below
represent the percentages of procedural explanations for these two content areas across
the two rounds.
Figure 5: Round 1 and 2 procedural explanations of answers – content area number

Number: procedural explanations, Rounds 1 and 2 (percentage of texts)

            Not present    Inaccurate    Partial    Full
Round 1     2.27%          7.95%         29.55%     60.23%
Round 2     10.94%         1.56%         51.56%     35.94%

Change in the strong content area:


 The number of texts that demonstrate full and accurate procedural explanations
decreased from 60.23% in Round 1 to 35.94% in Round 2.
 The number of texts that demonstrate partial procedural explanations (correct
but missing some steps) increased from 29.55% in Round 1 to 51.56% in Round
2.
 The number of texts in which procedural explanations were inaccurate or
incomplete decreased from 7.95% to 1.56% in Round 2.
 The number of texts where procedural explanations were not given increased
from 2.27% in Round 1 to 10.94% in Round 2.

Figure 6: Round 1 and 2 procedural explanations of answers – content area data

Data: procedural explanations, Rounds 1 and 2 (percentage of texts)

            Not present    Inaccurate    Partial    Full
Round 1     5.88%          1.96%         47.06%     45.10%
Round 2     25.00%         12.50%        56.25%     6.25%

Change in the weak content area:


 The number of texts that demonstrate full and accurate procedural explanations
decreased from 45.10% in Round 1 to 6.25% in Round 2.
 The number of texts that demonstrate partial procedural explanations (correct
but missing some steps) increased from 47.06% in Round 1 to 56.25% in Round
2.
The number of texts in which procedural explanations were inaccurate or incomplete
increased from 1.96% in Round 1 to 12.5% in Round 2.
 The number of texts that did not include procedural explanations increased from
5.88% in Round 1 to 25% in Round 2.

In comparison, the trends between the two rounds in these two content areas are similar,
bar the decrease in Round 2 in the number of texts with inaccurate explanation in the
number content area.

4.2.2 Conceptual explanations


Figure 7: Conceptual explanations in explanations – Round 1 and 2 by grouped grades

[Bar chart: “Conceptual Explanations - Round 1 and Round 2 by grade”. Percentages of Grade 3-6 and Grade 7-9 texts in Rounds 1 and 2 in each category: no conceptual links in explanation; poorly conceived conceptual links; some but not all conceptual links; conceptual links that explain process and background. The percentages are given in Tables 10 and 11 below.]

Analysis of conceptual explanations in Grade 3-6 small groups’ texts:

Table 10: Grades 3-6 conceptual explanations demonstrated in teacher text explanations

No conceptual links in explanation:
  Round 1: 4%    Round 2: 30%    Change: 26%*
Explanation includes poorly conceived conceptual links:
  Round 1: 19%   Round 2: 10%    Change: -9%
Weaker explanations (first two categories combined): Round 1: 23%    Round 2: 40%
Explanation includes some but not all conceptual links:
  Round 1: 39%   Round 2: 49%    Change: 10%
Explanation includes conceptual links that explain process and background:
  Round 1: 38%   Round 2: 11%    Change: -27%**
Stronger explanations (last two categories combined): Round 1: 77%    Round 2: 60%

*** Difference significant at a 99% level of confidence
* Difference significant at a 90% level of confidence

Observations about groups’ conceptual explanations in relation to correct answers in


Rounds 1 and 2 for the grade 3-6 groups:

1. The number of texts of the Grade 3-6 group that demonstrate weak conceptual
explanations increased by 17% between Rounds 1 and 2 (from 23% in Round 1 to
40% in Round 2).
2. The number of texts where no conceptual links were evident increased by 26% from
4% in Round 1 to 30% of texts in Round 2.
3. The number of texts where poorly conceived conceptual links were made decreased
by 9% (from 19% in Round 1 to 10% in Round 2).
4. The number of texts of the Grade 3-6 group that demonstrate strong conceptual
explanations decreased by 17% in Round 2 (from 77% in Round 1 to 60% in Round
2).
5. The gap between explanations that include conceptual links that explain the
background and process of the answer and explanations that include some but not
all of the conceptual links grew wider in Round 2 (from 1% in Round 1 to 39% in
round 2).

Analysis of conceptual explanations in Grade 7-9 small groups’ texts:

Table 11: Grades 7-9 conceptual explanations demonstrated in teacher text explanations

No conceptual links in explanation:
  Round 1: 6%    Round 2: 33%    Change: 27%***
Explanation includes poorly conceived conceptual links:
  Round 1: 15%   Round 2: 4%     Change: -11%**
Weaker explanations (first two categories combined): Round 1: 21%    Round 2: 37%
Explanation includes some but not all conceptual links:
  Round 1: 40%   Round 2: 34%    Change: -6%
Explanation includes conceptual links that explain process and background:
  Round 1: 38%   Round 2: 29%    Change: -9%
Stronger explanations (last two categories combined): Round 1: 79%    Round 2: 63%

*** Difference significant at a 99% level of confidence
** Difference significant at a 95% level of confidence
Observations about groups’ conceptual explanations in relation to correct answers in
Rounds 1 and 2 for the grade 7-9 groups:
1. The number of texts of the Grade 7-9 group that demonstrate weak conceptual
explanations increased (from 21% in Round 1 to 37% in Round 2).

2. The number of texts where no conceptual links were evident increased by 27% from
6% to 33% of texts.
3. The number of texts where poorly conceived conceptual links were made decreased
by 11% (from 15% in Round 1 to 4% in Round 2).
4. The number of texts of the Grade 7-9 group that demonstrate strong conceptual
explanations decreased by 15% in Round 2 (from 78% in Round 1 to 63% in Round
2).
5. The gap between explanations that include conceptual links that explain the
background and process of the answer and explanations that include some but not
all of the conceptual links was low in both rounds (2% in Round 1 and 5% in Round 2).

Overall findings on the conceptual explanations of the correct answer – Rounds 1 and
2
Figure 8: Conceptual links in explanations of answers -Round 1 and 2

[Bar chart: “Conceptual explanations - Overall - Round 1 and 2”. Percentages of all texts in Rounds 1 and 2 in each category: no conceptual links in explanation; poorly conceived conceptual links; some but not all conceptual links; conceptual links that explain process and background. The percentages are given in Table 12 below.]

Table 12: Conceptual explanations demonstrated in teacher text explanations

No conceptual links in explanation:
  Round 1: 5%    Round 2: 29%    Change: 24%***
Explanation includes poorly conceived conceptual links:
  Round 1: 17%   Round 2: 10%    Change: -7%**
Weaker explanations (first two categories combined): Round 1: 22%    Round 2: 39%
Explanation includes some but not all conceptual links:
  Round 1: 39%   Round 2: 45%    Change: 6%
Explanation includes conceptual links that explain process and background:
  Round 1: 40%   Round 2: 16%    Change: -24%***
Stronger explanations (last two categories combined): Round 1: 78%    Round 2: 61%

*** Difference significant at a 99% level of confidence
** Difference significant at a 95% level of confidence

Observations about groups’ conceptual explanations in relation to correct answers in


Rounds 1 and 2:
1. The overall number of texts that demonstrate weak conceptual explanations
increased between Rounds 1 and 2 (from 22% in Round 1 to 39% in Round 2).
2. The number of texts where no conceptual links were evident increased by 24%, from
5% in Round 1 to 29% of texts in Round 2. This increase was
significant at 99%.
3. The number of texts where conceptual explanations included poor conceptual links
decreased by 7% (from 17% in Round 1 to 10% in Round 2). This decrease was
significant at 95%.
4. The overall number of texts that demonstrate strong conceptual explanations
decreased by 17% in Round 2 (from 78% in Round 1 to 61% in Round 2).
5. The gap between explanations that include some but not all conceptual links and
conceptual explanations with links that explain the process and background was 1%
in Round 1 but it grew much wider to 29% in Round 2.
6. The decrease between Round 1 and Round 2 in the number of texts with conceptual
explanations with links that explain the process and background was significant at
99%.

Conceptual explanations, by mathematical content
Number was the content area in which the groups’ conceptual explanations of the
correct answer were the strongest. Algebra was the content area in which the groups’
conceptual explanations of the correct answer were the weakest. The graphs below
represent the percentages of conceptual explanations for these two content areas across
the two rounds.

Figure 9: Round 1 and 2 conceptual explanations of answers – content area number

Number: conceptual explanations, Rounds 1 and 2 (percentage of texts)

            Not present    Inaccurate    Partial    Full
Round 1     3.41%          15.91%        32.95%     47.73%
Round 2     25.00%         9.38%         40.63%     25.00%

Change in the strong content area:

 The number of texts that include conceptual links that explain process and
background decreased from 47.73% in Round 1 to 25% in Round 2.
The number of texts that include some but not all conceptual links increased
from 32.95% in Round 1 to 40.63% in Round 2.
The number of texts that include poorly conceived conceptual links decreased
from 15.91% in Round 1 to 9.38% in Round 2.
The number of texts that include no conceptual links in explanation increased
from 3.41% in Round 1 to 25% in Round 2.

Figure 10: Round 1 and 2 conceptual explanations of answers – content area algebra

Algebra: conceptual explanations, Rounds 1 and 2 (percentage of texts)

            Not present    Inaccurate    Partial    Full
Round 1     11.36%         27.27%        36.36%     25.00%
Round 2     41.67%         4.17%         45.83%     8.33%

Change in the weak area:

 The number of texts that include conceptual links that explain process and
background decreased from 25% in Round 1 to 8.33% in Round 2.
The number of texts that include some but not all conceptual links increased
from 36.36% in Round 1 to 45.83% in Round 2.
The number of texts that include poorly conceived conceptual links decreased
from 27.27% in Round 1 to 4.17% in Round 2.
The number of texts that include no conceptual links in explanation increased
from 11.36% in Round 1 to 41.67% in Round 2.

In comparison, the trends between the two rounds in these two content areas are similar.
Of note is the far larger increase in texts that include no conceptual links in explanation
in the weaker content area.

4.2.3 Awareness of Mathematical Error

Figure 11: Awareness of error in explanations – Round 1 and 2 by grouped grades

[Bar chart: “Awareness of Error - Round 1 and Round 2 by grade”. Percentages of Grade 3-6 and Grade 7-9 texts in Rounds 1 and 2 in each category: mathematical explanation of error not present; flawed or incomplete; emphasises the procedural; emphasises the conceptual. The percentages are given in Tables 13 and 14 below.]

Analysis of awareness of error in Grade 3-6 small groups’ texts:


The information in this graph is presented in the table below, classifying groups’
explanation texts according to the four categories above:

Table 13: Grades 3-6 awareness of mathematical error demonstrated in teacher text
explanations

In these texts a mathematical explanation of the particular error is not included:
  Round 1: 18%    Round 2: 21%    Change: 3%
In these texts the explanation of the particular error is mathematically inaccurate or incomplete and hence potentially confusing:
  Round 1: 12%    Round 2: 4%     Change: -8%*
Weaker explanations (first two categories combined): Round 1: 30%    Round 2: 25%
In these texts the explanation of the particular error is mathematically sound but does not link to common misconceptions or errors; the explanation is predominantly procedural:
  Round 1: 39%*   Round 2: 53%*   Change: 14%**
In these texts the explanation of the particular error is mathematically sound and suggests links to common misconceptions or errors:
  Round 1: 30%    Round 2: 21%    Change: -9%
Stronger explanations (last two categories combined): Round 1: 69%    Round 2: 75%

** Difference significant at a 95% level of confidence
* Difference significant at a 90% level of confidence

Observations about groups’ awareness of the mathematical error in Rounds 1 and 2 for
the grade 3-6 groups:
1. The number of texts of the Grade 3-6 group that demonstrate weak mathematical
awareness decreased in Round 2 (From 30% in Round 1 to 25% in Round 2).
2. The number of texts with mathematical flaws and/or that are potentially confusing
was reduced by 8%, from 12% to 4%.
3. The Grade 3-6 groups demonstrate a slight increase in the number of texts that demonstrate
strong awareness (from 69% to 75%).

4. For the texts in Round 2 that demonstrate strong mathematical awareness, the gap
between texts where explanation of the error is predominantly procedural and texts
where explanation of error links to common misconceptions of errors grew wider,
from 9% in Round 1 to 32% in Round 2.

Analysis of Grade 7-9 small groups’ texts:

Table 14: Grades 7-9 awareness of mathematical error demonstrated in teacher text
explanations

In these texts a mathematical explanation of the particular error is not included:
  Round 1: 15%    Round 2: 22%    Change: 7%
In these texts the explanation of the particular error is mathematically inaccurate or incomplete and hence potentially confusing:
  Round 1: 24%    Round 2: 15%    Change: -9%
Weaker explanations (first two categories combined): Round 1: 39%    Round 2: 37%
In these texts the explanation of the particular error is mathematically sound but does not link to common misconceptions or errors; the explanation is predominantly procedural:
  Round 1: 35%    Round 2: 38%    Change: 3%
In these texts the explanation of the particular error is mathematically sound and suggests links to common misconceptions or errors:
  Round 1: 26%    Round 2: 25%    Change: -1%
Stronger explanations (last two categories combined): Round 1: 61%    Round 2: 63%

Observations about groups’ awareness of the mathematical error in Rounds 1 and 2 for
the grade 7-9 groups:

1. The number of texts of the Grade 7-9 group that demonstrate weak mathematical
awareness remained roughly the same in Round 2 (39% in Round 1 and 37% in Round 2).
2. The number of texts with mathematical flaws or that are potentially confusing was
reduced in Round 2 by 9% (from 24% in Round 1 to 15% in Round 2).
3. The number of texts with no mathematical explanation of the error increased (from
15% in Round 1 to 22% in Round 2).
4. The number of texts of the Grade 7-9 group that demonstrate strong mathematical
awareness increased very slightly in Round 2 (from 61% in Round 1 to 63% in Round
2).
5. The gap between texts where explanation of the error is predominantly procedural
and texts where explanation of the error links to common misconceptions or errors
also only slightly grew wider in Round 2 (from 9% in Round 1 to 13% in Round 2).

Overall findings on the awareness of the mathematical error – Rounds 1 and 2

Figure 12: Awareness of error in explanations – Round 1 and 2

[Bar chart: “Awareness of error - Overall - Round 1 and 2”. Percentages of all texts in Rounds 1 and 2 in each category: mathematical explanation of error not present; flawed or incomplete; emphasises the procedural; emphasises the conceptual. The percentages are given in Table 15 below.]

Table 15: Awareness of the mathematical error demonstrated in teacher test explanations

Level of mathematical awareness | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
A mathematical explanation of the particular error is not included | 18% | 22% | 4% | Weaker | 36% | 28%
The explanation of the particular error is mathematically inaccurate or incomplete and hence potentially confusing | 17% | 6% | -11%*** | | |
The explanation of the particular error is mathematically sound but does not link to common misconceptions or errors; the explanation is predominantly procedural | 36% | 50% | 14%*** | Stronger | 64% | 72%
The explanation of the particular error is mathematically sound and suggests links to common misconceptions or errors | 29% | 22% | -7% | | |

*** Difference significant at a 99% level of confidence

Observations about groups’ awareness of the mathematical error in Rounds 1 and 2:


1. The overall number of texts that demonstrate weak awareness of error decreased
between Rounds 1 and 2 (from 36% in Round 1 to 28% in Round 2).
2. The number of texts where no mathematical awareness of the error was evident
increased by 4% from 18% in Round 1 to 22% of texts in Round 2. This increase was
not significant.
3. The number of texts where explanations were mathematically inaccurate decreased
by 11% (from 17% in Round 1 to 6% in Round 2). This decrease was significant at
99%.

4. The overall number of texts that demonstrate strong awareness of the mathematical
error increased by 8% in Round 2 (from 64% in Round 1 to 72% in Round 2).
5. The gap between explanations that show more procedural than conceptual
awareness of the error was 9% in Round 1 but it grew much wider to 28% in Round
2.
6. The increase between Round 1 and Round 2 in the number of texts that show more procedural awareness of the error was significant at 99%.

Awareness of error, by mathematical content


Number was the content area in which the groups’ awareness of error in their
explanations of the choice of incorrect answer was the strongest. Shape was the content
area in which the groups’ awareness of error in their explanations of the choice of
incorrect answer was the weakest. The graphs below represent the percentages of
explanations demonstrating awareness of error for these two content areas across the
two rounds.
Figure 13: Round 1 and 2 awareness of error – content area number

Number – Awareness of error, Rounds 1 and 2 (percentage of texts)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 20.90% | 11.94% | 28.36% | 38.81%
Round 2 | 18.18% | 3.03% | 54.55% | 24.24%

Change in the strong content area:


 The number of texts that demonstrate awareness of concept decreased from
38.81% in Round 1 to 24.24% in Round 2.
 The number of texts that demonstrate awareness of procedure increased from
28.36% in Round 1 to 54.55% in Round 2.

 The number of texts that have mathematical flaws decreased from 11.94% in
Round 1 to 3.03% in Round 2.
 The number of texts that include no mathematical awareness decreased from 20.90% in Round 1 to 18.18% in Round 2.

Figure 14: Round 1 and 2 awareness of error – content area shape

Shape – Awareness of error, Rounds 1 and 2 (percentage of texts)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 17.86% | 26.79% | 41.07% | 14.29%
Round 2 | 22.22% | 5.56% | 61.11% | 11.11%

Change in the weak content area:

 The number of texts that demonstrate awareness of concept decreased from


14.29% in Round 1 to 11.11% in Round 2.
 The number of texts that demonstrate awareness of procedure increased from
41.07% in Round 1 to 61.11% in Round 2.
 The number of texts that have mathematical flaws decreased from 26.79% in
Round 1 to 5.56% in Round 2.
 The number of texts that include no mathematical awareness increased from 17.86% in Round 1 to 22.22% in Round 2.

4.2.4 Diagnostic reasoning

Figure 15: Diagnostic reasoning in explanations – Round 1 and 2 by grouped grades

[Bar chart: Diagnostic reasoning – Rounds 1 and 2 by grouped grade (Grade 3-6 and Grade 7-9). Categories: no attempt to explain learner reasoning; description of learner reasoning does not hone in on error; description hones in on error but is incomplete; description is systematic and hones in on error. Values as given in Tables 16 and 17 below.]

Analysis of diagnostic reasoning in Grade 3-6 small groups’ texts:

Table 16: Grades 3-6 diagnostic reasoning demonstrated in teacher text explanations

Diagnostic awareness | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No attempt is seen to describe learners’ mathematical reasoning behind the particular error | 24% | 29% | 5% | Weaker | 55% | 58%
The description of the learners’ mathematical reasoning does not hone in on the particular error | 31% | 29% | -2% | | |
The description of the learners’ mathematical reasoning is incomplete although it does hone in on the particular error | 35% | 30% | -5% | Stronger | 45% | 42%
The description of the steps of learners’ mathematical reasoning is systematic and hones in on the particular error | 10% | 12% | 2% | | |

Observations about groups’ diagnostic reasoning in relation to the mathematical error


in Rounds 1 and 2 for the grade 3-6 groups:
1. The number of texts of the Grade 3-6 group that demonstrate weak diagnostic
reasoning increased slightly between Rounds 1 and 2 (from 55% in Round 1 to 58%
in Round 2).
2. The number of texts where no attempt was made to explain learner reasoning behind the error increased by 5%, from 24% in Round 1 to 29% of texts in Round 2.
3. The number of texts where learner reasoning was described but did not hone in on the error decreased slightly, by 2% (from 31% in Round 1 to 29% in Round 2).
4. The number of texts of the Grade 3-6 group that demonstrate strong diagnostic
reasoning slightly decreased by 3% in Round 2 (from 45% in Round 1 to 42% in
Round 2).
5. The gap between incomplete and systematic explanations was high but decreased in Round 2 (from 25% in Round 1 to 18% in Round 2).

Analysis of diagnostic reasoning in Grade 7-9 small groups’ texts:

Table 17: Grades 7-9 diagnostic reasoning demonstrated in teacher text explanations

Diagnostic awareness | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No attempt is seen to describe learners’ mathematical reasoning behind the particular error | 23% | 25% | 2% | Weaker | 59% | 37%
The description of the learners’ mathematical reasoning does not hone in on the particular error | 36% | 12% | -24%*** | | |
The description of the learners’ mathematical reasoning is incomplete although it does hone in on the particular error | 25% | 53% | 28%*** | Stronger | 41% | 64%
The description of the steps of learners’ mathematical reasoning is systematic and hones in on the particular error | 16% | 11% | -5% | | |

*** Difference significant at a 99% level of confidence

Observations about groups’ diagnostic reasoning in relation to the mathematical error


in Rounds 1 and 2 for the grade 7-9 groups:
1. There was a big decrease in Round 2 in the number of texts of the Grade 7-9 group
that demonstrate weak diagnostic reasoning (from 59% in Round 1 to 37% in Round
2).
2. The number of texts where no attempt was made to explain learner reasoning
increased by 2% from 23% to 25% of texts.
3. The number of texts where teachers attempt to describe learner reasoning but the
description does not hone in on the error decreased by 24% (from 36% in Round 1 to
12% in Round 2).
4. The number of texts of the Grade 7-9 group that demonstrate strong diagnostic
reasoning increased by 23% in Round 2 (from 41% in Round 1 to 64% in Round 2).
5. The gap between incomplete and systematic explanations was not very high in
Round 1 (9%) but grew wider in Round 2 (41%).

Overall findings on the diagnostic reasoning in relation to the mathematical error –
Rounds 1 and 2
Figure 16: Diagnostic reasoning in relation to the error in explanations – Round 1 and 2

[Bar chart: Diagnostic reasoning – Overall – Rounds 1 and 2. Percentage of texts: no attempt to explain learner reasoning (26%, 28%); description of learner reasoning does not hone in on error (32%, 24%); description hones in on error but is incomplete (30%, 36%); description is systematic and hones in on error (12%, 12%).]

Table 18: Diagnostic reasoning in relation to the error demonstrated in teacher test
explanations

Diagnostic awareness | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No attempt to explain learner reasoning | 26% | 28% | 2% | Weaker | 58% | 52%
Description of learner reasoning does not hone in on error | 32% | 24% | -8% | | |
Description of learner reasoning hones in on error but is incomplete | 30% | 36% | 6% | Stronger | 42% | 48%
Description of learner reasoning is systematic and hones in on error | 12% | 12% | 0% | | |

Observations about groups’ diagnostic reasoning in relation to the mathematical error
in Rounds 1 and 2:
1. The overall number of texts that demonstrate weak diagnostic reasoning in relation
to the error decreased slightly between Rounds 1 and 2 (from 58% to 52%).
2. The number of texts where there was no attempt to explain the learners’ errors increased by 2%, from 26% in Round 1 to 28% of texts in Round 2. This increase was not significant.
3. The number of texts where explanations did not hone in on the error decreased by 8% (from 32% in Round 1 to 24% in Round 2). This decrease was not significant.
4. The overall number of texts that demonstrate strong diagnostic reasoning in relation
to the error increased by 6% in Round 2 (from 42% in Round 1 to 48% in Round 2).
5. The gap between explanations that hone in on the error but are either incomplete or
complete was 18% in Round 1 but it grew to 24% in Round 2.
6. The number of texts with incomplete explanations but that do hone in on the error
increased from 30% in Round 1 to 36% in Round 2. This increase was not significant.
7. The number of texts with complete explanations that hone in on the error was 12% of
all texts and did not change between Rounds 1 and 2.

Diagnostic reasoning, by mathematical content


Measurement was the content area in which the groups’ diagnostic reasoning in their explanations of the choice of incorrect answer was the strongest. Shape was the content area in which the groups’ diagnostic reasoning in their explanations of the choice of incorrect answer was the weakest. The graphs below represent the percentages of
explanations demonstrating diagnostic reasoning in the explanation of errors for these
two content areas across the two rounds.

Figure 17: Round 1 and 2 diagnostic reasoning – content area measurement

Measurement – Diagnostic reasoning, Rounds 1 and 2 (percentage of texts)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 25.93% | 24.07% | 35.19% | 14.81%
Round 2 | 21.43% | 25.00% | 28.57% | 25.00%

Change in the strong content area:


 The number of texts in which the description of learner reasoning is systematic
and hones in on error increased from 14.81% in Round 1 to 25% in Round 2.
 The number of texts in which the description of learner reasoning hones in on
error but is incomplete decreased from 35.19% in Round 1 to 28.57% in Round 2.
 The number of texts in which the description of learner reasoning does not hone in on error increased from 24.07% in Round 1 to 25% in Round 2.
 The number of texts that include no attempt to explain learner reasoning decreased from 25.93% in Round 1 to 21.43% in Round 2.

Figure 18: Round 1 and 2 diagnostic reasoning – content area shape

Shape – Diagnostic reasoning, Rounds 1 and 2 (percentage of texts)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 28.57% | 41.07% | 21.43% | 8.93%
Round 2 | 33.33% | 27.78% | 38.89% | 0%

Change in the weak content area:


 The number of texts in which the description of learner reasoning is systematic
and hones in on error decreased from 8.93% in Round 1 to 0% in Round 2.
 The number of texts in which the description of learner reasoning hones in on
error but is incomplete increased from 21.43% in Round 1 to 38.89% in Round 2.
 The number of texts in which the description of learner reasoning does not hone
in on error decreased from 41.07% in Round 1 to 27.78% in Round 2.
 The number of texts that include no attempt to explain learner reasoning
increased from 28.57% in Round 1 to 33.33% in Round 2.

4.2.5 Multiple explanations
In order to code teachers’ use of multiple explanations, all the items were first coded as
mathematically feasible or not. The coding on this criterion was per item since the code
(per item) was assigned based on the overall coding of all of the texts relating to each
item. Non-feasible explanations include explanations that are not mathematically
focused on the error under discussion (for example, ‘Simply a lack in conceptual
understanding of an odd number’) or are what we call “general texts” (for example, ‘We
could not ascertain the reasoning behind this distractor as there was no clear or obvious
reason why a learner would choose it’). The final code for multiple explanations was
assigned according to the number of feasible/non-feasible explanations given for texts
relating to each item.
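As we read the coding rule described above, the per-item code depends only on how many of the explanations offered for an item are mathematically feasible and whether any general (non-feasible) texts accompany them. A minimal sketch of that rule follows; the function name and labels are ours, not the project's coding instrument.

def code_multiple_explanations(labels):
    """Assign the per-item 'multiple explanations' code from a list of explanation
    labels, each either 'feasible' or 'general' (i.e. non-feasible)."""
    feasible = labels.count("feasible")
    general = labels.count("general")
    if feasible == 0:
        return "no feasible mathematical explanation"
    if feasible == 1:
        return "one feasible mathematical explanation (with/without general explanations)"
    if general > 0:
        return "two or more feasible explanations combined with general explanations"
    return "two (or more) feasible mathematical explanations"

print(code_multiple_explanations(["feasible", "general"]))    # one feasible ...
print(code_multiple_explanations(["feasible", "feasible"]))   # two (or more) feasible ...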

Figure 19: Feasible and non-feasible explanations – Round 1 and 2

[Bar chart: Feasible and non-feasible explanations – Rounds 1 and 2 by grouped grade. Feasible explanations: Grade 3-6 85% (Round 1) and 80% (Round 2); Grade 7-9 86% (Round 1) and 78% (Round 2). Non-feasible explanations: Grade 3-6 15% (Round 1) and 20% (Round 2); Grade 7-9 14% (Round 1) and 22% (Round 2).]

Observation
The differences between Rounds 1 and 2 show a small decrease in the number of feasible
explanations offered by the two groups (Grade 3-6 and Grade 7-9) in explanation of
error and a corresponding increase in non-feasible explanations. This finding points to
the value of the presence of an expert leader in the groups.

The findings reported below show that although the decrease in Round 2 in the number of feasible explanations is small (5%) for the Grade 3-6 groups, and slightly higher (8%) for the Grade 7-9 groups, teachers across the two groups provide mainly one mathematically feasible/convincing explanation (with/without general explanations).

Figure 20: Multiple explanations in explanations – Round 1 and 2 by grouped grades

[Bar chart: Multiple explanations – Rounds 1 and 2 by grouped grade (Grade 3-6 and Grade 7-9). Categories: no feasible mathematical explanation; one feasible mathematical explanation with/without general explanations; two feasible mathematical explanations combined with general explanations; two (or more) feasible mathematical explanations. Values as given in Tables 19 and 20 below.]

Analysis of multiple explanations in Grade 3-6 small groups’ texts:

Table 19: Grades 3-6 multiple explanations demonstrated in teacher text explanations

Multiple explanations | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No mathematically feasible/convincing explanation is provided | 4% | 9% | 5% | Weaker | 66% | 83%
One mathematically feasible/convincing explanation is provided (with/without general explanations) | 62% | 74% | 8% | | |
At least two of the mathematical explanations are feasible/convincing (combined with general explanations) | 13% | 7% | -6% | Stronger | 34% | 17%
All of the explanations (two or more) are mathematically feasible/convincing | 21% | 10% | -11% | | |

Observations about groups’ multiple explanations of mathematical errors in Rounds 1


and 2 for the grade 3-6 groups:
1. Grade 3-6 use of multiple explanations was low in both Rounds 1 and 2. 66% of the
items in Round 1 and 83% of the items in Round 2 demonstrate little use of multiple
explanations.
2. The number of items with no feasible mathematical explanation at all increased very
slightly (from 4% in Round 1 to 9% in Round 2).
3. The number of items with one feasible mathematical explanation was high and increased by 11% in Round 2 (from 62% in Round 1 to 73% in Round 2).
4. Multiple explanations (two or more) were not highly evident in Round 1 and even
less so in Round 2 in the grade 3-6 groups.
5. The number of items in the Grade 3-6 group that demonstrate use of multiple
explanations decreased by 17% in Round 2 (from 34% in Round 1 to 17% in Round
2). The decrease is evident in both types of stronger explanations (6% and 11% in the
respective types).
6. The Grade 3-6 groups provide mainly one mathematically feasible/convincing
explanation (with/without general explanations).

Analysis of multiple explanations in Grade 7-9 small groups’ texts:

Table 20: Grades 7-9 multiple explanations demonstrated in teacher text explanations

Multiple explanations | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No mathematically feasible/convincing explanation is provided | 5% | 17% | 12% | Weaker | 53% | 87%
One mathematically feasible/convincing explanation is provided (with/without general explanations) | 48% | 70% | 22% | | |
At least two of the mathematical explanations are feasible/convincing (combined with general explanations) | 8% | 0% | -8% | Stronger | 46% | 13%
All of the explanations (two or more) are mathematically feasible/convincing | 38% | 13% | -25% | | |

Observations about groups’ multiple explanations of mathematical errors in Rounds 1


and 2 for the grade 7-9 groups:
1. Grade 7-9 use of multiple explanations was low in both Rounds 1 and 2. 53% of the
items in Round 1 and 87% of the items in Round 2 demonstrate little use of multiple
explanations.
2. The number of items with no feasible mathematical explanation at all increased by
12% (from 5% in Round 1 to 17% in Round 2).
3. The number of items with one feasible mathematical explanation radically increased
in Round 2 by 22% (from 48% in Round 1 to 70%, in Round 2).

4. Multiple explanations (two or more) were not highly evident in Round 1 and even
less so in Round 2 in the Grade 7-9 groups.
5. The number of items of the Grade 7-9 groups that demonstrate use of multiple explanations decreased sharply, by 33% in Round 2 (from 46% in Round 1 to 13% in Round 2).
6. There was a difference of 30% between the percentage of items where two or more
feasible mathematical explanations were offered with and without general
explanations in Round 1 and a 13% gap between these multiple explanation
categories in Round 2. This means that there was a decrease in both types of multiple
explanations from Round 1 to Round 2.
7. The number of items with two or more feasible mathematical explanations combined with general explanations decreased by 8%, and the number of items where two or more feasible mathematical explanations were put forward decreased by 25%. In other words, in Round 2 the Grade 7-9 groups did have 13% of items for which they offered two or more feasible mathematical explanations without general explanations, but all the other explanations they gave fell into the “weaker” categories.
8. The Grade 7-9 groups provide mainly one mathematically feasible/convincing
explanation (with/without general explanations).

Overall findings on the multiple explanations of the mathematical error – Rounds 1
and 2

Figure 21: Multiple explanations of the error in explanations – Round 1 and 2

[Bar chart: Multiple explanations – Overall – Rounds 1 and 2. Percentage of items: no feasible mathematical explanation (4%, 11%); one feasible mathematical explanation with/without general explanations (56%, 74%); two feasible mathematical explanations combined with general explanations (11%, 4%); two (or more) feasible mathematical explanations (12%, 11%).]

Table 21: Multiple explanations of the error demonstrated in teacher test explanations

Multiple explanations | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No feasible mathematical explanation | 4% | 11% | 7% | Weaker | 60% | 85%
One feasible mathematical explanation with/without general explanations | 56% | 74% | 18% | | |
Two feasible mathematical explanations combined with general explanations | 11% | 4% | -7% | Stronger | 23% | 15%
Two (or more) feasible mathematical explanations | 12% | 11% | -1% | | |

Observations about groups’ multiple explanations of the mathematical error in Rounds


1 and 2:
1. The use of multiple explanations was low in both Rounds 1 and 2.
2. 60% of the items in Round 1 and 85% of the items in Round 2 demonstrate little use
of multiple explanations.
3. The number of items with no feasible mathematical explanation at all increased by
7% (from 4% in Round 1 to 11% in Round 2).
4. The number of items with one feasible mathematical explanation increased in Round
2 by 18% (from 56% in Round 1 to 74% in Round 2).
5. There were very few multiple explanations (two or more) in Round 1 and even fewer in Round 2.
6. The number of items where any kind of multiple explanations of the error were
given decreased, by 8% in Round 2 (from 23% in Round 1 to 15% in Round 2).
7. However the decrease was not in the strongest category of multiple explanations
(which decreased from 12% in Round 1 to 11% in Round 2), it was in the category of
two feasible mathematical explanations combined with general explanations (which
decreased from 11% in Round 1 to 4% in Round 2).
8. All groups provide mainly one mathematically feasible/convincing explanation
(with/without general explanations).

Multiple explanations, by mathematical content

Shape was the content area in which the groups’ multiple explanations of the choice of
the incorrect answer were the strongest. Algebra was the content area in which the
groups’ multiple explanations of the choice of incorrect answer were the weakest. The
graphs below represent the percentages of explanations demonstrating multiple
explanations of the error for these two content areas across the two rounds.

Figure 22: Round 1 and 2 Multiple explanations – content area shape

Shape – Multiple explanations, Rounds 1 and 2 (percentage of items)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 3.45% | 58.62% | 10.34% | 27.59%
Round 2 | 15.38% | 61.54% | 0% | 23.08%

Change in the strong content area:


 The percentage of items for which all explanations (two or more) were
mathematically feasible/convincing decreased from 27.59% in Round 1 to 23% in
Round 2.
 The percentage of items for which at least two of the mathematical explanations
were feasible/convincing but which were combined with general explanations
decreased from 10.34% in Round 1 to 0% in Round 2.
 The percentage of items in which one mathematically feasible/convincing
explanation was provided (with/without general explanations) increased from
58.62% in Round 1 to 61.54% in Round 2.
 The percentage of items for which no mathematically feasible/convincing
explanation was provided increased from 3.45% in Round 1 to 15.38% in Round
2.

Figure 23: Round 1 and 2 Multiple explanations – content area algebra

Algebra – Multiple explanations, Rounds 1 and 2 (percentage of items)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 5.26% | 68.42% | 10.53% | 15.79%
Round 2 | 7.14% | 92.86% | 0% | 0%

Change in the weak content area:


 The percentage of items for which all explanations (two or more) were
mathematically feasible/convincing decreased from 15.79% in Round 1 to 0% in
Round 2.
 The percentage of items for which at least two of the mathematical explanations
were feasible/convincing but which were combined with general explanations
decreased from 10.53% in Round 1 to 0% in Round 2.
 The percentage of items in which one mathematically feasible/convincing
explanation was provided (with/without general explanations) increased from
68.42% in Round 1 to 92.86% in Round 2.
 The percentage of items for which no mathematically feasible/convincing
explanation was provided increased from 5.26% in Round 1 to 7.14% in Round 2.

4.2.6 Use of the everyday in explanations of the error
“Use of the everyday” was considered in the analysis since the NCS Curriculum
emphasises linking the teaching of mathematics to everyday contexts or using everyday
contexts to enlighten learners about mathematical concepts. The ICAS tests included
predominantly contextualized questions (ranging from 98% in the Grade 3 test to 85% in
the Grade 9 test) and yet teachers made very few references to these contexts in their
explanations. In certain questions there may not have been contextualized explanations
that would have assisted in the explanation of the solution. The code “no discussion of
the everyday” includes all explanations which did not make reference to the everyday
whether it was appropriate or not. Further, more detailed analysis of the kinds of contextualized explanations which were offered, and of whether they were missing when they should have been present, could be carried out; this was not within the scope of this report.

Figure 24: Use of the everyday in explanations of the error – Round 1 and 2 by grouped grades

[Bar chart: Use of the everyday in explanations – Rounds 1 and 2 by grouped grade (Grade 3-6 and Grade 7-9). Categories: no discussion of everyday; everyday dominates and obscures the explanation; everyday link is relevant but not mathematically grounded; everyday explanation links clearly to the mathematical concept illustrated. Values as given in Tables 22 and 23 below.]

The figure shows that the Grade 3-6 groups make more reference to the everyday in their explanations than the Grade 7-9 groups, but in both cases this is minimal.

Analysis of use of the everyday in Grade 3-6 small groups’ texts:

Table 22: Grades 3-6 use of the everyday demonstrated in teacher text explanations of the error

Use of everyday in explanations | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No discussion of everyday is evident | 88% | 97% | 9%*** | low | 93% | 98%
Teachers’ use of the ‘everyday’ dominates and obscures the mathematical understanding; no link to mathematical understanding is made | 5% | 1% | -4%** | | |
Teachers’ use of the ‘everyday’ is relevant but does not properly explain the link to mathematical understanding | 5% | 2% | -3%* | high | 7% | 2%
Teachers’ use of the ‘everyday’ enables mathematical understanding by making the link between the everyday and the mathematical clear | 2% | 0% | -2%* | | |

*** Difference significant at a 99% level of confidence


** Difference significant at a 95% level of confidence
* Difference significant at a 90% level of confidence

Observations about groups’ use of the everyday in explanations of the error in Rounds 1 and 2 for the grade 3-6 groups:
1. Grade 3-6 use of everyday explanations was extremely low in both Rounds 1 and 2.
2. In 88% of the texts in Round 1 and in 97% of the texts in Round 2, the Grade 3-6 groups did not discuss everyday contexts when they dealt with the errors.
3. Together with the texts where the use of everyday context was obscuring (5% in Round 1 and 1% in Round 2), the number of Grade 3-6 texts in which discussion of everyday contexts in the explanation of the error is low increased by 5% (from 93% in Round 1 to 98% in Round 2).
4. Grade 3-6 use of everyday explanations that were relevant but incomplete and/or enabled mathematical understanding of the error was very low in both Rounds 1 and 2 (7% in Round 1 and 2% in Round 2).
5. The difference in use of the everyday (relevant but incomplete versus enabling mathematical understanding) was small and almost the same in the two rounds (3% in Round 1 and 2% in Round 2).

Analysis of use of the everyday in explanations in Grade 7-9 small groups’ texts:

Table 23: Grades 7-9 use of the everyday demonstrated in teacher text explanations of the error

Use of everyday in explanations | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No discussion of everyday is evident | 96%* | 100%* | 4%** | Weaker | 97% | 100%
Teachers’ use of the ‘everyday’ dominates and obscures the mathematical understanding; no link to mathematical understanding is made | 1% | 0% | -1% | | |
Teachers’ use of the ‘everyday’ is relevant but does not properly explain the link to mathematical understanding | 3% | 0% | -3%* | Stronger | 3% | 0%
Teachers’ use of the ‘everyday’ enables mathematical understanding by making the link between the everyday and the mathematical clear | 1% | 0% | -1%* | | |

** Difference significant at a 95% level of confidence


* Difference significant at a 90% level of confidence

Observations about groups’ use of the everyday in explanations of the error in Rounds 1 and 2 for the grade 7-9 groups:
1. Grade 7-9 use of everyday explanations was extremely low in both Rounds 1 and 2.
2. In 96% of the texts in Round 1 and in 100% of the texts in Round 2, the Grade 7-9 groups did not discuss everyday contexts when they dealt with the errors.
3. In Round 2 the number of texts with no discussion of everyday contexts increased by 4%.
4. Together with the texts where the use of everyday context was obscuring (1% in Round 1 and 0% in Round 2), the number of Grade 7-9 texts in which everyday contexts are discussed in the explanation of the error is negligible.
5. Grade 7-9 use of everyday explanations that were relevant but incomplete and/or enabled mathematical understanding of the error was very low in both Rounds 1 and 2 (3% in Round 1 and 0% in Round 2).
6. Only in Round 1 was there a difference in use of the everyday (relevant but incomplete versus enabling mathematical understanding), and it was negligible.

Overall findings on the use of the everyday in explanations of the mathematical error
– Rounds 1 and 2
Figure 25: Use of the everyday demonstrated in teacher explanations of the error – Round 1
and 2

[Bar chart: Use of everyday in explanations – Overall – Rounds 1 and 2. Percentage of texts: no discussion of everyday (91%, 98%); everyday dominates and obscures the explanation (4%, 1%); everyday link is relevant but not mathematically grounded (4%, 1%); everyday explanation links clearly to the mathematical concept illustrated (12%, 0%).]

Table 24: Use of the everyday in explanations of the error demonstrated in teacher test
explanations

Use of everyday in explanations | Round 1 | Round 2 | Change between rounds | Strength of explanation | Round 1 | Round 2
No discussion of everyday is evident | 91% | 98% | 7% | Weaker | 94% | 99%
Teachers’ use of the ‘everyday’ dominates and obscures the mathematical understanding; no link to mathematical understanding is made | 4% | 1% | -3%** | | |
Teachers’ use of the ‘everyday’ is relevant but does not properly explain the link to mathematical understanding | 4% | 1% | -3%* | Stronger | 16% | 1%
Teachers’ use of the ‘everyday’ enables mathematical understanding by making the link between the everyday and the mathematical clear | 12% | 0% | -12%*** | | |

*** Difference significant at a 99% level of confidence


** Difference significant at a 95% level of confidence
* Difference significant at a 90% level of confidence

Observations about groups’ use of the everyday in explanations of the mathematical error in Rounds 1 and 2:
1. The overall use of the everyday in explanations of the error was extremely low in both Rounds 1 and 2.
2. In 91% of the texts in Round 1 and 98% of the texts in Round 2, the groups did not offer everyday-related explanations of the error.
3. Only in 12% of the texts in Round 1 did the groups use the ‘everyday’ in ways that enable mathematical understanding, by making the link between the everyday and the mathematical clear.

Use of the everyday, by mathematical content


There was very little reference to the everyday in mathematical explanations across all content areas. Because such minimal reference was made, in this section we highlight the two content areas in which at least some reference was made to the everyday in Rounds 1 and 2. The graphs below represent the percentages of explanations in which some reference to the everyday was demonstrated in the explanations of errors for these two content areas across the two rounds.

Figure 26: Rounds 1 and 2 use of everyday in explanations – content area measurement

Measurement – Use of the everyday, Rounds 1 and 2 (percentage of texts)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 85.19% | 1.85% | 5.56% | 7.41%
Round 2 | 96.43% | 3.57% | 0% | 0%

Change in the area of measurement


 The number of texts that demonstrate groups’ use of the ‘everyday’ that enables
mathematical understanding by making the link between the everyday and the
mathematical clear, decreased from 7.41% in Round 1 to 0% in Round 2.
 The number of texts in which teachers’ use of the ‘everyday’ is relevant but does
not properly explain the link to mathematical understanding, decreased from
5.56% in Round 1 to 0% in Round 2.
 The number of texts in which teachers’ use of the ‘everyday’ dominates and
obscures the mathematical understanding, and where no link to mathematical
understanding is made, is negligible, decreasing from 1.85% in Round 1 to 0% in
Round 2.
 The number of texts where no discussion of everyday is evident increased from
85.19% in Round 1 to 96.43% in Round 2.

Figure 27: Rounds 1 and 2 use of everyday in explanations – content area number

Number – Use of the everyday, Rounds 1 and 2 (percentage of texts)
Round | Not present | Inaccurate | Partial | Full
Round 1 | 92.54% | 4.48% | 2.99% | 0%
Round 2 | 96.97% | 0% | 3.03% | 0%

Change in the area of number:


 In both rounds no text was found that demonstrates groups’ use of the
‘everyday’ that enables mathematical understanding by making the link between
the everyday and the mathematical clear.
 The number of texts in which teachers’ use of the ‘everyday’ is relevant but does not properly explain the link to mathematical understanding was negligible in both rounds, increasing from 2.99% in Round 1 to 3.03% in Round 2.
 The number of texts in which teachers’ use of the ‘everyday’ dominates and
obscures the mathematical understanding, and where no link to mathematical
understanding is made, decreased from 4.48% in Round 1 to 0% in Round 2.
 The number of texts where no discussion of everyday is evident increased from
92.54% in Round 1 to 96.97% in Round 2.

4.2.7 Comparative strengths and weaknesses between the two sets of grouped grades
The graphs below show the changes between rounds in the number of texts that do not have mathematical content (“not present”) and texts that are mathematically inaccurate (“inaccurate”) on the procedural, conceptual, awareness and diagnostic criteria. The first bar (blue) in each pair represents the Grade 3-6 group and the second bar (red) represents the Grade 7-9 group. Positive changes represent an increase between Round 1 and Round 2 while negative changes represent a decrease between Round 1 and Round 2. The actual changes in percentages between rounds are given in the table below the graph.
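The change values plotted below can be read as the Round 2 percentage minus the Round 1 percentage for each criterion level. A minimal sketch using the rounded Grade 3-6 awareness values from Table 13 (the figures themselves use unrounded data, so they differ slightly):

# Grade 3-6 'awareness of error' percentages from Table 13 (rounded).
round1 = {"not_present": 18, "inaccurate": 12}
round2 = {"not_present": 21, "inaccurate": 4}

# Percentage-point change between rounds, per criterion level.
changes = {level: round2[level] - round1[level] for level in round1}
print(changes)  # {'not_present': 3, 'inaccurate': -8}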
Figure 28: Grade 3-6 and Grade 7-9 changes in “not present” and “inaccurate” explanations

Changes between rounds on criterion levels “not present” and “inaccurate” (percentage-point change between rounds)

Criterion level | Grade 3-6 | Grade 7-9
Subject Matter Knowledge Procedural: Not present | 14.2 | 5.8
Subject Matter Knowledge Procedural: Inaccurate | -5.8 | 1.4
Subject Matter Knowledge Conceptual: Not present | 26.0 | 27.7
Subject Matter Knowledge Conceptual: Inaccurate | -8.3 | -11.3
Subject Matter Knowledge Awareness: Not present | 3.0 | 7.6
Subject Matter Knowledge Awareness: Inaccurate | -8.4 | -9.3
PCK Diagnostic: Not present | 4.7 | 1.4
PCK Diagnostic: Inaccurate | -2.0 | -23.9

These two criterion levels are the “weaker” levels, and thus an increase here represents a weakening while a decrease represents a strengthening between rounds. The absence of mathematical content in an explanation increased across all criteria for both grade groups, but the presence of inaccurate texts decreased more markedly for the Grade 7-9 group, which suggests that the Grade 7-9 group maintained quality better when they worked on their own without group leaders.

The graphs below show the changes between rounds in the number of texts that demonstrate incomplete mathematical explanations (“partial”) and texts that demonstrate full mathematical explanations (“full”) on the procedural, conceptual, awareness and diagnostic criteria. The first bar (blue) in each pair represents the Grade 3-6 group and the second bar (red) represents the Grade 7-9 group. Positive changes represent an increase between Round 1 and Round 2 while negative changes represent a decrease between Round 1 and Round 2. The actual changes in percentages between rounds are given in the table below the graph.

Figure 29: Grade 3-6 and Grade 7-9 changes in “partial” and “full” explanations

Changes between rounds on criterion levels “partial” and “full” (percentage-point change between rounds)

Criterion level | Grade 3-6 | Grade 7-9
Subject Matter Knowledge Procedural: Partial | 14.6 | 12.5
Subject Matter Knowledge Procedural: Full | -23.0 | -19.7
Subject Matter Knowledge Conceptual: Partial | 9.2 | -6.7
Subject Matter Knowledge Conceptual: Full | -26.9 | -9.7
Subject Matter Knowledge Awareness: Partial | 14.2 | 3.2
Subject Matter Knowledge Awareness: Full | -8.8 | -1.5
PCK Diagnostic: Partial | -5.5 | 28.3
PCK Diagnostic: Full | 2.9 | -5.8

These two criterion levels are the “stronger” levels, and thus an increase here represents a strengthening while a decrease represents a weakening between rounds. The lack of completeness in mathematical explanations increased across most criteria for both grade groups, but less so for the Grade 7-9 group, while the fullness of correct explanations decreased, again less so for the Grade 7-9 group. This again suggests that the Grade 7-9 group maintained quality better when they worked on their own without group leaders.

4.2.8 Comparative strengths and weaknesses according to content area
The findings in relation to content areas in which groups were strongest and weakest are
summarised in Table 25 below. Strength was measured by taking the content area where
higher level explanations (partial or full) were the strongest in the two rounds. An
analysis of the changes between Round 1 and 2 for the various categories and types of
explanations offered shows that the groups explanations were most often stronger in the
area of number than in any other of the mathematical content areas.

Weakness was measured by taking into account where lower level explanations (not
present or inaccurate) were the most common in both rounds, or were the highest in
Round 2. There was not one single content area where weakness was demonstrated
consistently more than in other areas. Weakness was evident in the areas of data, algebra
and shape.

Table 25: Content areas according to strength and weakness in explanations

Text | Criteria | Strong content area | Weak content area
Correct Answer text | Procedural explanation of correct answer | Number | Data
Correct Answer text | Conceptual explanation of correct answer | Number | Algebra
Error text | Awareness of error | Number | Shape
Error text | Diagnostic reasoning | Measurement | Shape
Error text | Multiple Explanations | Shape | Algebra
Error text | Use of everyday | Number, Measurement | n/a

Section Five: General findings

5.1 What are the findings about teachers’ reasoning about learners’ errors?
This section begins with a short qualitative analysis of teachers’ experience of the error
analysis activity. In this we report on three common perceptions of teachers when
interviewed on their experiences of the error analysis activity and its influence on their
practice.

In view of the division between the six criteria we used to evaluate teachers’ knowledge of the items (correct solution and errors), we summarize the findings in two parts. We first look at what the data shows about the three criteria that mapped against Ball et al.’s two Subject Matter Knowledge (SMK) domains. We then look at the three criteria that mapped against one of the domains Ball et al. specify for Pedagogical Content Knowledge (PCK). We do this in order to look at the quality of teachers’ reasoning both in relation to the correct solution and in relation to the error. We answer the question:

 On what criteria is the groups’ error analysis weaker and on what criteria is the
groups’ error analysis stronger?

We proceed by looking at the difference between grouped grades’ (3-6 and 7-9)
performance on the different criteria between rounds. We answer the question:

 Is there a difference between primary (Grade 3-6 groups) and high school (Grade 7-9
groups) teachers in sustaining the performance on the Round 1 error analysis
activity?

Although this was not a key focus of the project, in view of the analysis of teachers’ knowledge we examine the differences in performance of the groups across different
mathematical content areas. We answer the question:

 In which mathematical content areas do the groups produce better judgement on the
error?

Given the model of teacher development, in which groups were working with group leaders with academic expertise, we tentatively suggest the areas in which these leaders contributed to the groups in Round 1. We answer the question:

 What does the change in performance between Round 1 and 2 suggest about the role
of group leaders?

5.2 Teachers’ experiences of the error analysis activity
In the various interviews conducted throughout the project, teachers, subject facilitators
and group leaders confirmed that the error analysis activity was interesting and
worthwhile. The different participants emphasized repeatedly that working with
questions actually made a difference to them- it made them aware that questions can be
asked in different ways, and they started thinking about asking their questions in a
different way. Before, they said, they knew what answer they wanted to get and created
a question to get to this answer, but they didn’t really think that some formulations of
the same question are better than others. Teachers found it interesting to examine the
ICAS questions, to see what multiple choice questions are like, how they work, what
kind of information they make available and how. Many of them took ICAS questions
and tried them in their classes and came back and reflected on their learners’
performance. They also enjoyed very much creating their own questions, testing their
learners and analysing the errors, as can be seen in the following three excerpts from the
focus group discussion after the activity.
We didn’t know how they came up with the answer. We had no clue. And the only way
(laughter) we thought we would have a clue … clue is to let the kids do it … and see what
they came up with and then ask them why they chose that answer. That is the only way,
because we had no clue. It was this funny picture with stars all over the show and it said,
which of the following is rotational symmetry and none of them actually had rotational
symmetry if you removed one. So, to us it didn’t make sense and we thought we’d throw it
back to the kids.

I did the same with the grade 9s. There was one question we couldn’t get to the answer, we
got the correct answer but why did, because there was I think 32% chose this specific wrong
answer. So I took it to my grade 9 classes, three of them, and then just for the fun of it, I gave
it to my grade 10 classes, two of the grade 10 classes. And the responses was…and then just
to be really now getting to the bottom of this, I gave it to my grade 8 son at home. And it was
very interesting coming from the students sitting in front of you.

I also did it with the grade 3s for quite a few questions and I got them to write at the back
how did they get to the answer. I didn’t ask them to explain it to me, I told them to write it
down. And if they had to draw it, then they drew it, if they had to colour in, they coloured in.
And it was very interesting. And many, many of them were able to explain to us how they
got to it, and some of them actually said there I guessed because I don’t know. They wrote: I
guessed because I don’t know.

In what follows we record other teachers’ comments about what they feel they learned
from doing the error analysis activity.

First, teachers reported that they understand that they need to gain a far deeper
understanding of the questions they set for their learners. Although this sounds obvious,

teachers reported that their approach to asking questions was simple or even unthoughtful before the error analysis.
Well, I don’t think I’ve ever thought about the way we ask questions before. You know, you
just do it because it’s been done like that for so many years. So now for the first time you start
thinking: but, maybe I can ask it in a different way? Maybe there is still something wrong
with the question, how it’s been put itself.

Teachers suggested that before this activity they did not have a way to think about test
or activity questions: “I think we all just set a question: now work out the perimeter of
this, and, you know, that’s it.” They wanted an answer to the question but did not know
how to think about the question in depth: “I don’t think as a maths teacher when we
originally set any test that you really analyse and probe in depth into the question… .”
The teachers felt that together, the mapping activity (see Process Document) and the
error analysis activity, showed them a way to examine the conceptual depth of
questions.

In the following two quotations, the respective teachers point to two related matters of
conceptual depth. The first quotation refers to the idea that questions draw on more
than one concept. Even though a question may have a specific conceptual focus, it could
draw on a number of related concepts and so it is essential that teachers unpack
questions when testing learners. Precisely because questions draw on webs of concepts,
learners may misunderstand these questions, particularly when the focus of the question
is not clear.
And maybe to add onto that, I think one other thing that came out of this is that you look
at a question and try to go…or maybe try to go back to school and say, what skills do
they need? What concepts do they need in order for them to, like she say, there are
multiple skills in one question there, what is it that they need to know in order to answer
this question? What skills? For instance, you might discover that is a story sum but we
need Pythagoras there. Some skills that they need to identify that they need to answer the
question.

Whilst the first quotation raises the conceptual background of questions, which has implications for learners’ preparedness, the next quotation raises the importance of knowing what the question is in fact asking: does it obscure its focus in ways that mean learners, in particular young and inexperienced ones, may struggle to understand what is expected of them? The teacher in the following quote poses direct and useful questions that she now asks when formulating a question:
I think one other thing is that when we look at the questions now and probably now
when I look at a question that I set as well, I think about what is it I want from this, what
do I want them to do, and is my question going to allow it? And is it set in such a way
that they can see the steps that they need to do, or they understand the various
operations that are required? Especially at grade 3 level… . And now when I look at a

question and especially when it comes to problem solving and things like that I look at
that specifically to see, is it going to give us the answer we want, and are the kids going
to understand what’s expected of them? Ok, that’s one thing for me to set the question
and I understand what I want them to do, but do they understand?

Second, in addition to thinking about questions more deeply, teachers reported that
their orientation to learners’ approach to questions has changed. Doing the error
analysis activity, trying to figure out what the learners were thinking when they chose
the correct answer or the distractor, has given them a lens to think about learners’
reasoning:
I also thought that I should encourage my children to explain what they say. I should ask
them WHY? if they give me an answer… Not just leave it at that.

This is very different from what teachers were used to doing:


We only give learners corrections where they needed corrections. I must say, we never
tell them there’s a misconception here, this is how you were supposed to do it, we just
give the corrections.

To most teachers the notion of “misconception” was new and many struggled to develop a productive relationship with the idea. So whilst the literature distinguishes misconceptions from errors and posits that misconceptions are part of learning, for many of the teachers a misconception remained something that they need to foresee and avoid. In almost every interview, the interviewer needed to remind the teachers that a misconception is a mathematical construct that develops through learning, and that its development is not necessarily the teacher’s fault. More than that, misconceptions can be a productive tool for teachers to deepen their understanding of learners’ thinking and inform their teaching. Even towards the end of the project, the teachers tended to treat misconceptions as errors that needed to be avoided:
Interviewee: What I can say, when we plan, so we try by all means to avoid the
misconception side.
Interviewer: To avoid it?
Interviewee: To avoid it. I think we try to avoid it.
Interviewer: Why would you avoid it?
Interviewee: Because it will distract the learners. It will confuse them. So I think that will
confuse them, I don’t think we can rely on it. You know, part of good teaching is that you
make people aware of the pitfalls. So no, I don’t think you’d avoid it entirely, but being
made aware of the pitfalls is different to putting up a booby trap. And I think that’s what
she’s saying is don’t put booby traps in the way. Does that make sense?

When asked to reflect on how the error analysis activity helped the teachers in planning
their lessons and in teaching the planned lesson, teachers acknowledged its influence in

two ways. In the following quotation the teacher speaks about awareness. Overcoming
misconceptions, she says “was right up front…the whole time”:
I would say awareness. If one is aware of the distractors that we identified, that brought
it to mind, and I think that we were going through it and when we were drawing up the
worksheets and drawing up the things, we did have that at the back of our minds and
saying, well let’s not fall into the same trap of having those distractors. And of course as
we were designing the learning program, the whole focus was on overcoming the
misconception. So it was right up front, it was there the whole time that was our
motivation the whole time, was to overcome that misconception of the equal sign.

This is an important point in that it suggests that in addition to following a syllabus or a plan made by an HOD, the teacher feels that there is another lens that informs her practice: the challenge to try and overcome misconceptions.
I mean, you’re given your frames and your time works, and your milestones and
whatever type of thing, but when you actually sit down and plan the lesson, maybe
foremost in our mind it should actually be, how am I going to teach this that the learners
are going to not make these errors and understand it properly, and if you’re aware of
what those errors are, then you can address them. But that also comes with experience, I
suppose. So, if I plan a lesson and I see where they’ve made the errors, the next time I
plan that lesson I might bring in different things to that lesson, bearing in mind what
those errors were. So adapting and changing your lessons. You might even re-teach that
lesson again.

I also think that in the error analysis you found where their misconceptions were and
then you had to find some way of addressing it, so the point of the lesson plan is to now
go and see how you can practically correct these misconceptions….

The reported experiences cited above are interesting and certainly encouraging.
Notwithstanding, they must be taken with caution and in relation to the quantitative
analysis that follows.

5.3 Findings from the quantitative analysis


On what criteria is the groups’ error analysis weaker and on what criteria is the groups’ error
analysis stronger?

Finding 1: Groups drew primarily on mathematical knowledge and less so on other


possible explanations to explain the correct answer or the errors.
When analysing the correct answer and the error, in about 70%-80% of the texts, groups
drew primarily on mathematical knowledge and much less so on other discourses. In
both rounds the groups tried hard not to resort to common teacher talk on error such as
test-related explanations (the learners did not read the question well, or the learners
guessed) or learner-related (the question is not within the learners’ field of experience)
or curriculum-related (they have not learned this work). Interestingly, this finding is
aligned with groups’ use of everyday knowledge in explaining errors: Despite the
almost mandatory instruction by the NCS to link to everyday experiences when
explaining mathematical concepts, the findings on this criterion point to a very different
reality, with groups hardly referring to everyday contexts in their explanations of
mathematical errors. The use of everyday as part of error explanations was minimal
across all grades and all content areas. It was more evident in Round 1, when the groups
worked with expert leaders, albeit in 12% of the texts only. Only this small number of
texts in the sample demonstrates explanations of errors that make clear the link between
an everyday phenomenon and the mathematical content of the item, and thus shows a
use of ‘everyday’ that enables mathematical understanding. In Round 2 it was barely
evident at all.28

Finding 2: Most of groups’ explanations of the correct solutions and of the error are
incomplete, missing crucial steps in the analysis of what mathematics is needed to
answer the question.
Most of the explanation texts fall within the “incomplete” category, more so in Round
2. 57% of the procedural explanations of the correct answer, 45% of the conceptual
explanations of the correct answer, and 50% of awareness of error explanations are
incomplete. This is an important finding about the teachers’ knowledge. The evidence of
teachers using predominantly incomplete procedural explanations is worrying.
Incomplete procedural explanations of the mathematics involved in solving a particular mathematical problem may impede teachers’ capacity to identify mathematical errors,
let alone to diagnose learners’ reasoning behind the errors.

Finding 3: There is a correlation between groups’ procedural and conceptual


explanations
The correlation between procedural explanations and conceptual explanations was high
in both rounds, although it decreased in Round 2 (r = 0,74 in Round 1 and r = 0,66 in
Round 2). This suggests that when teachers are able to provide a full explanation of the
steps to be taken to arrive at a solution they are also more aware of the conceptual
underpinnings of the solution, and vice versa. The weaker the procedural explanations
are, the weaker the conceptual explanations, and vice versa. The higher correlation
between procedural and conceptual explanations in Round 1 than in Round 2 suggests
that when working with the group leaders, teachers unpacked the concepts underlying
the procedures more consistently.

28
It is possible that this finding is specific to error analysis of learners' responses to (predominantly) multiple choice
questions, where three feasible mathematical responses are given. More work is being done to interrogate this finding,
specifically with regard to the teachers' analysis of learners' errors on their "own test", which did not include any
multiple choice questions, and during teaching. Further examination of this finding is done in relation to teachers' ways
of engaging with error during teaching (referring to the 4th domain of teacher knowledge, Knowledge of content and
teaching (KCT); see Table 1).

Much research in South Africa suggests that teachers use procedural and not enough
conceptual explanations (Carnoy et al, 2011) and that this may explain learners’ poor
performance. A new and important lesson can be learned from the correlation between
procedural and conceptual explanations of the correct answer. The correlation suggests
that there is conceptual interdependence between the procedural and conceptual aspects
in teachers’ explanation of the mathematics that underlie a mathematical problem. When
the quality of teachers’ procedural explanations goes higher, the quality of the
conceptual explanations is also improved (and vice versa). This suggests that
mathematical explanations of procedure and of concept rely on good subject matter
knowledge of both, and a good procedural knowledge can help teachers in building
their understanding of the underling concept. Instead of foregrounding the lack of
conceptual understanding in teachers’ mathematical knowledge, more effort is needed
to improve the quality of teachers’ procedural explanations, making sure that teachers
are aware of which steps are crucial for addressing a mathematical problem and what
counts as a full procedural explanation. (Examples of the range of procedural and
conceptual explanations can be found in Appendix 6 and Appendix 7.)
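For reference, the correlation coefficients reported in this section are read here as Pearson product-moment correlations computed over the paired coded scores that the two criteria received for the same texts; this reading is an assumption on our part, since the report does not name the statistic. For paired scores (x_i, y_i), i = 1, ..., n, with means \bar{x} and \bar{y}, the coefficient is

\[ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}\,\sum_{i=1}^{n}(y_i - \bar{y})^{2}}} \]

A value such as r = 0,74 therefore indicates that texts scored higher on the procedural criterion tended strongly to be scored higher on the conceptual criterion as well; on its own it does not indicate which kind of explanation drives the other.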

More research is needed to differentiate between strong and weak procedural explanations, in general and in different mathematical content areas. More research is
needed to understand the quality of the relationship between full(er) procedural
explanations and conceptual understanding. This is important for building a database for
teachers of crucial steps in explanations of leverage topics. We believe that this will help
the effort of building mathematics knowledge for teaching.

Finding 4: With more practice, groups' explanations demonstrated decreased inaccuracy, but at the expense of quality in other categories of explanations within the three criteria of Subject Matter Knowledge
In all the Subject Matter Knowledge (SMK) explanations there were fewer inaccurate
explanations in Round 2. In fact this seems to be the category of explanation
in which they improved the most. This improvement needs to be understood
relationally, in the context of the three remaining categories within each of the three
SMK criteria. In what follows we look at these three criteria.
 Procedural explanation of the correct answer: There are very few inaccuracies when
groups describe the steps learners need to take to arrive at the correct solution (8% of
the texts in Round 1 and 5% of the texts in Round 2). In Round 2, when the groups
worked without leaders, they kept a similar level of accuracy, but when they were not
sure about the steps to be followed to arrive at the correct solution, they did not
mention procedure in their explanation at all (these explanations increased from 3% in
Round 1 to 16% in Round 2, a significant increase). The majority of
correct answer procedural explanations in both rounds were stronger. The ratio
between weaker and stronger procedural explanations of the correct answer in
Round 2 was maintained in favour of stronger explanations, in particular of correct
but incomplete explanations (57% of the procedural explanations of the correct
answer in Round 2 were correct but incomplete). These findings point to groups
engaging meaningfully with procedural explanations of the correct answer. That
said, in Round 2 the number of full procedural explanations decreased (from 46% in
Round 1 to 23% in Round 2). Taking these findings together, the following
observation can be made: The significant decrease of full procedural explanations
and the significant increase of explanations without any and/or with less information
on procedure suggest that when groups worked without their group leaders their
performance on procedural explanations of the correct answer was weaker.

 Conceptual explanations of the correct answer: Maintaining accuracy in conceptual
explanations of the correct answer was more difficult for the groups. There was a
significant decrease in inaccurate conceptual explanations of the correct answer
between rounds (17% in Round 1 to 10% in Round 2). This change was at the
expense of a significant increase in explanations which had no conceptual links (5%
in Round 1 increased to 29% in Round 2). About 30% of the explanations in Round 2
offered no conceptual links (almost double the equivalent procedural explanations
category). Notwithstanding the quantitative difference, the trend between these two
explanations is similar: In Round 2, when the groups worked without leaders, when
they were not sure about the correct mathematical explanation, they tended to
provide explanations without conceptual links. Similarly to procedural explanations
of the correct answer, the majority of the conceptual explanations in both rounds
were stronger. The ratio between weaker and stronger conceptual explanations of
the correct answer in Round 2 was maintained in favour of stronger explanations.
These findings point to groups engaging meaningfully with conceptual explanations
of the correct answer. That said, in Round 2 the number of full conceptual
explanations decreased (from 40% in Round 1 to 16% in Round 2). Taking these
findings together, the following observation can be made: The significant decrease in
full conceptual explanations, the insignificant improvement of partial conceptual
explanations, and the overall increase of weak explanations in Round 2, suggest that
when groups work without their group leaders their performance on conceptual
explanations was weaker.

It was also weaker than their performance on procedural explanations. In both rounds the groups started with 40% of texts with full explanations (procedural and
conceptual). The decrease in Round 2 in both of these explanations was significant,
but higher in conceptual explanations. Whilst close to 60% of the procedural
explanations of the correct answer were accurate but incomplete, the parallel figure for
conceptual explanations was less than 50%.

 Awareness of error: In Round 2, when the groups worked without a group leader,
they significantly reduced the number of inaccurate texts (or incomplete and hence
potentially confusing explanations). Only 6% of Round 2 awareness of error
explanations were inaccurate. This figure is consistent with the very low number of
inaccurate procedural explanations of the correct answer in Round 2 (5%). As in the
above subject matter knowledge-related explanations of the correct answer, groups
decreased inaccuracies in their explanations of the error, which is an important
improvement across these three criteria.

On two categories of awareness explanations, the groups' performance did not change: In both rounds, 20% of the texts offered no mathematical awareness of the
error (this finding is consistent with the percentage of non-feasible explanations
found in Round 1 and 2 texts, which was about 20%, see below). Second, in both
rounds the number of texts that demonstrated a full awareness of errors (in which the
explanation of the error is mathematically sound and suggests links to common
errors) remained at about 25% of the total error-related texts. Other than reduction of
inaccuracy, a notable change in Round 2 was found in the 3rd category of this
criterion: In 50% of the error texts in Round 2, the explanation of the error was
mathematically sound but did not link to common errors. Taken together, the
following observation can be made: when the groups worked without leaders, the
ratio between stronger and weaker explanations remained in favour of stronger
awareness of error in their explanations. The percentage of stronger explanations
improved mainly due to the increase of the 3rd category of explanation to 50% (see
above). Together with the significant decrease of mathematically inaccurate texts, the
groups maintained their performance in Round 2.

Finding 5: There is a correlation between groups' awareness of error and their diagnosis of learners' reasoning behind the error
The correlation between awareness of the mathematical error and diagnostic reasoning
was also high and increased in Round 2 (r = 0,651 in Round 1 and r = 0,71 in Round
2). When groups demonstrate high awareness of the mathematical error (SMK) they are
more likely to give the appropriate diagnosis of the learner thinking behind that error
(PCK).

The correlation between awareness and diagnostic reasoning merits reflection. The
lesson it provides is that when teachers can describe the error mathematically well
(SMK), they are more likely to be able to delve into the cognitive process taken by the
learners and describe the reasoning that led to the production of the error (PCK).
Although this is only a correlational finding, we tentatively suggest that improving
teachers' mathematical awareness of errors could help improve teachers' diagnostic
reasoning. Furthermore, in view of the finding that teachers' procedural explanations of
the mathematics underlying the correct answer are themselves weak (missing steps,
incomplete), the finding that the teachers struggled to describe the mathematical way in
which the learners produced the error is to be expected. All of this has implications for
the relationship between SMK and PCK. The correlation between awareness and
diagnostic reasoning, and between procedural and conceptual knowledge, brings to the
fore the importance of subject matter knowledge in teaching.

Finding 6: Groups struggled to describe learners’ reasoning behind the error

Close to 30% of the texts in both rounds did not attempt to explain learners’ error and
another 27% of the texts in both rounds described learners’ reasoning without honing in
on the error. Altogether more than 50% of the texts in both rounds demonstrated weak
diagnostic reasoning. About 33% of the texts in both rounds honed in on the error but the
description of learner reasoning was incomplete. In both rounds then, in about 90% of
the texts, groups offered no or incomplete explanation of learners’ reasoning. Only 12%
of texts in each of the rounds were systematic and honed in on the learners’ reasoning
about the error. This is the criterion in which groups’ performance was the weakest,
more so (albeit insignificantly) in Round 1, and proportionally more in the Grade 3-6
group. As in awareness of error explanations, the groups performed better on texts that
do hone in on the error but are incomplete. The weakness in explaining learners’
reasoning is also evident and is consistent with the difficulty the groups had in producing
more than one explanation of the error. All groups provided mainly one
mathematically feasible/convincing explanation (with/without general explanations).
60% of the items in Round 1 and 85% of the items in Round 2 demonstrated little use of
multiple explanations.
These two findings suggest that groups struggle to think about the mathematical content
covered by the item from the perspective of how learners typically learn that content.
According to Ball and colleagues (Ball, Thames and Bass, 2008; Ball, 2011), this type of
thinking implies "nimbleness in thinking" and "flexible thinking about meaning". The
groups' difficulty in thinking meaningfully about rationales for the ways in which the
learners were reasoning, and their inability, even in a group situation, to think of
alternative explanations, is evidence that they lack these two qualities in the way they
approach errors.

Finding 7: In Round 2 across all criteria the number of texts without a mathematical
explanation increased while the number of inaccurate texts decreased
The number of texts with mathematically inaccurate explanations generally decreased in
Round 2, which suggests that just by doing more items, even without leaders, the
likelihood of producing explanations which were not mathematically flawed improved.
Bearing this in mind, it was interesting to note that while inaccurate texts decreased in
number between Rounds 1 and 2, texts that did not include a mathematical explanation
increased. This could be an indication that the group leaders were useful, albeit not
significantly so, in enhancing awareness of the mathematical concepts and in focusing
the explanations on the mathematical content of the question, since in Round 2, when
the leaders were absent, a higher number of explanations without mathematical
substance were given.

Is there a difference between primary (Grade 3-6 groups) and high school (Grade 7-9 groups)
teachers in sustaining the performance on the Round 1 error analysis activity?

Finding 8: The Grade 7-9 group maintained better quality than the Grade 3-6 group when working without group leaders
An examination of the performance of the two sets of grouped grades on both the
weaker and the stronger level criteria points to the better quality maintained by the
Grade 7-9 group when they worked without a group leader in Round 2. One particularly
notable change was that in Round 2, on the diagnostic reasoning criterion, there was a
large decrease in inaccurate mathematical explanations and a corresponding large
increase in correct though partially complete mathematical explanations for the Grade 7-9 group.
This is very different from the general pattern where the decrease in inaccurate
explanations was seen in conjunction with an increase in general texts. This suggests
that the learning in this group was strongest in relation to the diagnosis of error.

Finding 9: Teachers' reasoning in relation to mathematical concepts and errors seems to be strongest in the content area of number, while weakness is spread across the areas of data, shape, algebra and measurement
Number is the content area which is most often taught in schools and so this finding
corresponds with the knowledge expected of teachers. It is interesting to note that
diagnostic reasoning was strongest in the area of measurement and that most multiple
explanations were offered in the area of shape. The everyday was used very little in
explanations, but the two content areas in which most (albeit still very little) reference to
the everyday was made were number and measurement. These are findings that
could be further investigated.

Data is a relatively “new” content area in the South African mathematics curriculum
which could explain the weakness here, while algebra and shape are recognised
internationally as more difficult mathematical content areas.

What does the change in performance between Round 1 and 2 suggest about the role of group
leaders?

Finding 10: Group leaders are important


When they worked with leaders, groups:
 completed more items,
 provided more full explanations and fewer partial explanations,
 provided more conceptual explanations and fewer procedural explanations,
 provided more mathematically focused explanations and fewer "general texts" types
of explanations,
 unpacked the concepts underlying the procedures more consistently, and
 gave more multiple explanations of errors.

5.4 Summary
In what follows we summarise the main findings of the quantitative analysis and draw
conceptual implications for the construct we propose to frame the idea of teachers'
thinking about error: "diagnostic judgement".

In both Rounds (more so in Round 2), groups tended to give mathematically accurate
but incomplete procedural and/or conceptual explanations of the correct answer and/or
of the error (SMK-related criteria). This means that groups were able to describe, albeit
incompletely, some of the steps that learners should follow and some of the conceptual
links they need to understand to arrive at the solution, and when the groups identified
the error, their level of mathematical accuracy was high. Only in very few texts were the
groups’ procedural and conceptual explanations of the correct answer found to be
inaccurate. This needs to be taken together with groups being found to give more
“stronger” than “weaker” explanations and maintaining this strength in Round 2.

This suggests that generally teachers’ did not mis-recognise the procedure or the concept
the correct answer consists off, and they could recognise an error. Nevertheless, in all
these three acts of recognition, many of their explanations is incomplete and some were
inaccurate. The implication we draw from this is that in order for teachers to be able to
improve their error recognition and their explanations of the correct answer, they need
to develop their content knowledge so that they will be able to produce fuller
explanations. To repeat, the distinctive feature of teachers’ knowledge, which
distinguishes them from other professionals who need mathematics for their work is
that “teachers work with mathematics in its decompressed or unpacked form” (Ball,
Thames and Bass, 2008b, p400). When teachers’ subject matter knowledge is strong they
will have acquired an ability to judge when and how to move from compressed to
unpacked mathematical knowledge, and how to provide explanations that is both
accurate and full. What this study of error analysis suggests is that teachers’ when
explanations are basically correct their incomplete form may cause confusion.
Incomplete explanations, we suggest signal weak content knowledge.

In the following example we present two explanations given for a Grade 8 question. The
first explanation is of the correct answer and it is an example of a basically correct but
incomplete explanation. The second explanation is of the error. It is both inaccurate and
confusing.

Table 26: An example of an explanation that is accurate but incomplete

ICAS 2006, Grade 8, Question 10


Correct answer: B
Selected distractor: D
Content Area: Geometry – Rotational symmetry

Teacher explanation of the correct answer


By removing the star in the centre all the other stars
would still be symmetrical.
Limitations of explanation of the correct answer
This explanation is a typical compressed explanation
where the concept is assumed rather than elaborated.
Furthermore, what is included by way of explanation
states that the removal of the centre star gives the correct
solution, but this is not an explanation. Teachers should
explain that learners need to visualise rotation to see that
the centre is not affected by rotation and can therefore be
removed. This could be based on a broader explanation
of the meaning of rotational symmetry so that the
explanation is generalisable.
Teacher explanation of the error
The learners could have imagined an axis of symmetry
between A and B. They could have then counted out the
number of stars on either side which totals five stars,
therefore D is out.
Limitations of explanation of the error
This explanation further demonstrates the lack of
knowledge of the concept - rotational symmetry. The
counting of the stars on either side of a line relates to
line symmetry and not rotational symmetry. The
position of the imagined line of symmetry (which
would not even be used to find the rotational
symmetry) is not well described.
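A fuller, generalisable explanation of the kind called for above might run as follows; the wording is ours, offered as a sketch rather than as the groups' text. A figure F has rotational symmetry of order n if a rotation of 360°/n about its centre maps the figure onto itself:

\[ R_{360^\circ/n}(F) = F \]

The centre of rotation is the one point that is fixed by every such rotation, so removing (or adding) a star at the centre cannot change the order of rotational symmetry, whereas removing any off-centre star breaks the pattern that the rotation must preserve. Counting stars on either side of an imagined line, by contrast, is a test for line symmetry, not rotational symmetry.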

Teachers particularly struggled to imagine learners' reasoning, to offer a variety of
explanations, and to meaningfully reflect on the everyday context of questions and link
this to the mathematical context of questions. Groups' judgement about learners' thinking
(PCK) was very weak, and their ability to approach learners' thinking in diversified ways
was also weak. So despite demonstrating a very small percentage of inaccurate
explanations, the aspect of subject matter knowledge that came to the fore in this study
is the incompleteness in groups' explanations of the error (awareness of error), which
seems to correlate with explanations that hone in on learners' thinking (diagnostic
reasoning). This suggests a relationship of dependence between:
 what teachers can do creatively for learners (explain ideas in a differentiated way,
connect between everyday and mathematical contexts of questions, PCK) and
imagine the ways learners think (PCK)
 and the quality of the subject matter knowledge they acquire and develop.

In the following example we present two explanations given for a Grade 5 question. In this
example both explanations, of the correct answer and of the error, demonstrate
insufficient content knowledge. In the case of the error text this leads to an inability to
make a full diagnostic judgement.

Table 27: Grade 5 test item explanations illustrating poor diagnostic judgement

ICAS 2007, Grade 5, Question 16


Correct answer: B
Selected distractor: D
Content Area: Numbers and Operations – completing an equation

Teacher explanation of the correct answer


The learner could have subtracted 6 from 21
and divided by 3.
Limitations of explanation of the correct
answer
This explanation offers no conceptual links.
Conceptually an understanding of the equality
of the two sides of an equation underlies the
solution.
Teacher explanation of the error
The learner added 6 and 21 and got 27 and
divided 27 by 3 to get 9.
Limitations of explanation of the error
Procedurally this could be exactly what a
learner may have done to reach this incorrect
solution, but the explanation does not hone in
on the misconception that may have led to such
procedural activity. This misconception relates
to a lack of understanding of the equality of the
two sides of an equation: the learners reposition
the equal sign without adjusting the operations
accordingly.
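The arithmetic described in the two explanations above can be written out to show where the diagnosis should have landed. The item itself is not reproduced in this report, so the equation below is an assumed form, chosen only because it is consistent with the numbers the group used (subtracting 6 from 21 and dividing by 3 for the correct answer; adding 6 and 21 and dividing by 3 for the error):

\[ 3 \times \square + 6 = 21 \]

\[ \text{Correct reasoning: } \square = \frac{21 - 6}{3} = 5 \qquad \text{Erroneous reasoning: } \square = \frac{21 + 6}{3} = 9 \]

On this reading the learner 'moves' the 6 across the equal sign without inverting the operation, treating the two sides as if they did not need to remain equal - which is precisely the misconception about equality that the limitation above points to.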

Notwithstanding these weaknesses, three strengths of the project can be recognised in the comparison between Rounds 1 and 2:
 The main one being that in Round 2, groups gave fewer explanations which were
mathematically inaccurate. This is an important achievement that needs to be
recognised. In Round 2, when groups worked without a group leader they managed
to decrease their inaccuracy across the different criteria. As discussed before, this
improvement within the weaker category of explanations often came at the expense
of an increase in texts that did not include mathematical explanations relevant to the
criteria. We propose that having participated other DIPIP project activities and
listened to feedback, groups were more cautious, and as a result, when they were not
sure (and having no group leader to lean on), they did not give mathematical
explanations.
 Secondly, it is important to acknowledge as an achievement that when analysing the
correct answer and the error, in about 70%-80% of the texts, groups drew primarily
on mathematical knowledge and much less so on other discourses.
 Thirdly, in Round 2, in the three SMK-related criteria, most of the explanations
remained within the two stronger categories.

Section Six: Implications for professional
development: developing diagnostic judgement

The central predicament of audit culture is that it produces massive amounts of data,
which is good for "external accountability" but often remains remote from "internal
accountability". Elmore, who made this distinction, notes the following:
Internal accountability precedes external accountability. That is, school personnel
must share a coherent, explicit set of norms and expectations about what a good
school looks like before they can use signals from the outside to improve student
learning. Giving test results to an incoherent, atomized, badly run school doesn't
automatically make it a better school. The ability of a school to make improvements
has to do with the beliefs, norms, expectations, and practices that people in the
organization share, not with the kind of information they receive about their
performance. Low-performing schools aren't coherent enough to respond to external
demands for accountability … Low-performing schools, and the people who work in
them, don't know what to do. If they did, they would be doing it already. You can't
improve a school's performance, or the performance of any teacher or student in it,
without increasing the investment in teachers' knowledge, pedagogical skills, and
understanding of students. This work can be influenced by an external accountability
system, but it cannot be done by that system. (2002, 5-6)

In other words, the important and very complicated condition for achieving the link
between teachers’ tacit knowledge, their motivation to change and performance is
teachers' belief about their competence. This belief is linked directly to their learning
experiences, to meaningful opportunities to learn what specific curriculum standards
actually mean. The argument about internal accountability, first, is that with sufficient
meaningful professional development, teachers can be helped to see that their efforts can
bear fruit (Shalem, 2003). Teachers need to be able to see the reasons for change,
understand its core principles and be convinced that it is feasible and will benefit their
learners. This gives rise to the question of what constitutes a meaningful learning
opportunity for teachers. “Alignment” between formative and summative assessments
is the notion which the international literature uses to describe the reciprocity that is
needed for teacher learning.

International research shows that just having another set of data (in the form of
benchmarking, targets and progress reports) that 'names and shames' schools leads to
resentment and compliance but not really to improvement of learning and teaching
(McNeil, 2000; Earl & Fullan, 2003; Fuhrman & Elmore, 2004). In South Africa, Kanjee
(2007) sums up the challenge:
For national assessment studies to be effectively and efficiently applied to improve the
performance of all learners, the active participation of teacher and schools is essential. …
Teachers need relevant and timeous information from national (as well as international)
assessment studies, as well as support on how to use this information to improve
learning and teaching practice. Thus a critical challenge would be to introduce
appropriate polices and systems to disseminate information to teachers. For example,
teacher-support materials could be developed using test items administered in national
assessments. (p. 493)
Katz et al (2005) draw an important distinction between “accounting” and
“accountability” (which follows on from Elmore’s distinction between external and
internal accountability). They define the former as the practice of gathering and
organising data for benchmarking, which is the mechanism that the Department of
Education has put in place in order to assure the public that it gets a good service for its
investment. They define the latter as “teachers-led educational conversations” about
what specific data means and how it can inform teaching and learning (Earl & Fullan,
2003; Earl & Katz, 2005, Katz et al, 2009). In this view, the international and local
benchmarking tests mentioned above form an “accounting” feature of the South African
educational landscape. Katz et al want to recast "accountability", shifting it from its
reporting emphasis to the making of informed professional judgement, where
judgement is constructed through data-based conversations on evidence that is gathered
from systematic research and from assessments. They argue:
The meaning that comes from data comes from interpretation, and interpretation is a
human endeavour that involves a mix of insights from evidence and the tacit knowledge
that the group brings to the discussion. ... This is at the heart of what's involved in
determining a needs-based focus. The process begins by tapping into tacit knowledge, by
engaging in a hypothesis generation exercise and recasting “what we know” into “what
we think we know”... Instead of suspending beliefs in the service of data or adamantly
defending unsubstantiated beliefs, the conversation is a forum for examining both and
making the interrelationship explicit. (Katz et al 2009, p28)

“Accountability” as phrased here emphasizes two ideas. First, in order for teachers to
learn from evaluation data what their learners “can and/or cannot do at a particular
stage or grade"29, they need to be engaging their tacit knowledge in some or other
structured learning opportunity ("recasting 'what we know' into 'what we think we
know'"). Secondly, accountability (in the above sense) constitutes teachers' reflection as a
shared process, as a structured conversation between professionals. And so, since these
conversations are structured conversations, "accountability" enables teachers to hold
themselves and each other "to account" by explaining their ideas and actions, to each
other, in terms of their experiences as well as their professional knowledge (Brodie &
Shalem, 2011). In this way, Katz et al argue, accountability conversations can give
participants imagination for possibilities that they do not yet see and help them make
tacit knowledge more explicit and shared.30 In our project, we used the evaluation data
of proficiency results on the ICAS test as an artefact around which to structure a learning
opportunity to experience a process of diagnostic assessment of learners' errors in
mathematics evaluation data.

29
Media statement issued by the Department of Basic Education on the Annual National Assessments (ANA): 04
February 2011. http://www.education.gov.za/Newsroom/MediaReleases/tabid/347/ctl/Details/mid/1389/ItemID/3148/
Default.aspx
30
For the idea of “Accountability conversations” see Brodie & Shalem, 2011.
Research has only recently begun to engage with the question of how to use the data
beyond that of an indicator of quality, i.e. beyond benchmarking (Katz, Sutherland, &
Earl, 2005; Boudett, City & Murnane, 2005; Katz, Earl, & Ben Jaafar, 2009). Some
conceptual attempts to examine a more balanced relationship between accounting and
accountability include Shavelson et al (2002)31, Nichols et al (2009)32, and Black and
Wiliam (2006). In the South African context, Dempster (2006), Dempster and Zuma
(2010), Reddy (2006) and Long (2007) have each conducted small case studies on test-
item profiling, arguing that this can provide useful data for formative and diagnostic
purposes.

Arguably, awareness about learners’ errors is a useful step in the process of significantly
improving practice. Hill and Ball (2009) see “analyzing student errors” as one of the four
mathematical tasks of teaching “that recur across different curriculum materials or
approaches to instruction" (p70).33 Studies on teaching dealing with errors show that
teachers' interpretive stance is essential for the process of remediation of error, without
which teachers simply re-teach without engaging with the mathematical source of the
error, or with its metacognitive structure (Peng, 2009, 2010; Prediger, 2010; Gagatsis &
Kyriakides, 2000).

31
Shavelson et al devised what they call “a logical analysis” that test-designers or teachers can use in order to
“psychologize about the nature of the problem space that a student constructs when confronted with an assessment task”
(2002, p15). They argue that effective error analysis should use a “logical analysis” of task demands together with
“empirical analysis” of the kinds of cognitive processes a task, has in fact, evoked by students. In more common terms the
two steps include task analysis followed by an error analysis. In constructing the process of "logical analysis", Shavelson
et al used a coding scheme consisting of numerous categories that together make up 4 types of knowledge: "declarative"
(knowing that), "procedural" (knowing how), "schematic" (knowing why) and "strategic" knowledge (knowing when
knowledge applies). These were used to align the type of knowledge intended for Science tasks or their “construct
definitions” with the observations made about “the student’s cognitive activities that were evoked by the task as well as
the student’s level of performance” (p6). Shavelson et al nest this kind of analysis within what they call as “the assessment
square”, a framework that they develop to increase the alignment between the type of knowledge structure to be
measured (the 4 types of knowledge stated above), the kind of task designed for measuring learners’ performance of the
knowledge structure (including the design features of different tasks), the variety of responses elicited by the learners,
and the inferences made on the basis of these analyses. This type of analysis is useful for task analysis; it helps to unpack
the cognitive demands embedded in tasks, their degree of openness and level of complexity.
32
In their work, Nichols et al attempt to create a framework that will link information from performance on a
particular assessment with student learning. Their model suggests several interpretive acts built in a sequence that
connects the following data "structures":
 “student data” or sets of data derived from systemic assessment,
 “knowledge domain model” or the knowledge and skills associated with a learning construct,
 “a student model” or the representation of current understanding of a learner’s or specific class of learners’
knowledge and
 “a teaching model” or a sets of methods that have been shown to be effective in relation to the other domains (the
knowledge and the student domain).
What holds these structures together and gives them meaning are the recurring interpretive acts required by the teacher:
Data interpretation involves reasoning from a handful of particular things students say, do, or make in particular
circumstances, to their status on more broadly construed knowledge, skills, and abilities that constitute the student
model. (p17)
33
The others being: “encountering unconventional solutions, choosing examples, or assessing the mathematical integrity
of a representation in a textbook”. (ibid)
In South Africa, Adler sees teachers’ knowledge of error analysis as a component of
what she calls mathematics for teaching (2005, p3). She asks:
What do teachers need to know and know how to do (mathematical problem solving) in
order to deal with ranging learner responses (and so some error analysis), and in ways
that produce what is usefully referred to as “mathematical proficiency”, a blend of
conceptual understanding, procedural fluency and mathematical reasoning and problem
solving skills? (ibid)

From this point of view, teachers’ diagnostic judgment is essential; it is used to make
decisions which affirm learners’ performance, but also to make decisions which classify
learners and select them - or not - for particular futures. What does the DIPIP Phases 1
and 2 research suggest diagnostic judgement entails?

Diagnostic Judgement

Prediger’s argument that teachers who have an interest in learners’ rationality are aware
of approaches to learning implies that diagnostic judgment encompasses far more than
reasoning about the way in which learners think mathematically when they answer
mathematics test questions. This implies that diagnostic judgement should include
understanding about learners, such as their contexts, culture, learning styles and
possible barriers to learning. It also implies that diagnostic judgement will look different
in different teaching practices, for example during teaching, when planning a lesson,
and when making assessments.

The data of this project suggests that error analysis of mathematical evaluation data
relies on firm SMK - specifically, correctness and fullness of knowledge of the steps to
be followed to arrive at the solution and of the most relevant concept underlying the
question. The distinction between accuracy and fullness is important. Whilst a general
mathematician can figure out an accurate solution with little attention to the number,
kind and sequence of steps to be followed, for a teacher this is different. Teachers need
to know both - they need to be able to distinguish explanations that are correct but not
completely full from those that are correct and full. Teachers need to unpack
what is compressed behind the correct answer or behind an error. This has to do with
their role in socialising learners into doing mathematics. Unlike the general
mathematician or other professionals, teachers need to show learners which explanation
is better and why, what steps are to be followed, in what sequence, and why. As
the analysis of teacher knowledge suggests, without a full knowledge of the explanation,
teachers may recognize the error only partially, as they may not know what the crucial
steps that make up the solution are or their sequence and the conceptual relation
between them. Without thorough knowledge of the mathematical content and
procedures relating to that content (Common Content Knowledge) it is unlikely that
teachers will be able to systematically look for "a pattern in student errors" or "size up
whether a non-standard approach would work in general" (Specialized Content Knowledge,
see Ball, Thames & Bass, 2008, p400).

The study shows that there is a conceptual interdependence between the procedural and
conceptual aspects in teachers’ explanation of the mathematics that underlie a
mathematical problem. It also suggests that there is interdependence between awareness
of the mathematical error and the quality of teachers’ diagnostic reasoning. Together this
means that diagnostic reasoning is dependent on prior content knowledge, that
following the learner’s reasoning is dependent on teachers having a full picture of what
the question is in fact asking, how it is asking it, what the underlying concept of the
question is, what kind of procedure it relies on and what crucial steps have to be
completed. Only with this knowledge can contextual factors about learners'
misrecognition add value to how teachers analyse learners' errors. So we
propose a vertical form construct to describe teachers’ knowledge of errors: When
teachers can describe the steps and the underlying concepts of a question in full (or
mathematically well), they are more likely to be able to size up the error or interpret its
source (SMK). When teachers can describe the error mathematically well (SMK), they are
more likely to be able to delve into the cognitive process taken by different kinds of
learners and describe the reasoning that led to the production of an error (PCK).

The idea of developing teachers’ diagnostic Judgement is implied by other research in


South Africa and internationally

The teacher today is required to demonstrate different types of professional knowledge which range from knowledge of subject matter to recognizing that an answer is wrong,
or that a particular mathematical solution cannot be accepted because it cannot work in
general, to much more subtle kinds of knowledge such as 'knowledge of self'. This kind
of teacher is regarded as a specialist for being able to teach subject matter knowledge with
attunement to the curriculum, to the knowledge field and to the learner. This kind of
attunement requires a distinct mode of awareness that is often informed by dispositions
and perspectives that are not readily accessible to teachers in terms of their pre-service
training, more so in South Africa (Adler & Huillet, 2008, p19). This is consistent with the
works of Nesher (1987), Borasi (1994), Gagatsis and Kyriakides (2000), Kramarski and
Zoldan (2008) and Prediger (2010), which show that a focus on learners' mathematical
thinking when producing errors provides a mechanism for teachers to learn to think
more carefully about their teaching and for learners to change their attitudes towards
errors (Borasi, 1994, p189). Borasi argues that experiencing learning through errors
shifted learners’ conception about the use of errors which until that point were seen
solely as mistakes that need to be remediated. In terms of teaching, Borasi argues, errors
are “a valuable source of information about the learning process, a clue that researchers
and teachers should take advantage of in order to uncover what students really know
and how they have constructed such knowledge” (p170).

The development of “formative assessment” opens up new possibilities for teachers
with regard to this. The notion of formative assessment has changed the pedagogical
role of teachers in assessment. The following definition, proposed by Black and Wiliam,
explains the idea of “formative”, grounding it in awareness of learners’ errors.
Practice in the classroom is formative to the extent that evidence about student
achievement is elicited, interpreted, and used by teachers, learners, or their peers, to
make decisions about the next steps in instruction that are likely to be better, or better
founded, than the decisions they would have taken in the absence of the evidence that
was elicited. (2009, p9)

At the core of this view is a pedagogical practice that consists of a sequence of activities
structured by the teacher in response to evidence s/he gathers about students’ learning
from classroom and/or systemic assessment. This also includes classroom discussions
amongst learners and their peers and amongst teachers on the meaning of evaluative
criteria, on errors that appear to be common, and on ways of addressing them (See also
Brookhart, 2011). Having these kinds of engaging discussions, Black and Wiliam argue,
will promote self-regulation and autonomy in students and “insight into the mental life
that lies behind the student’s utterances” in teachers (2009, p13). Nichols et al (2009)
offer the idea of “a reasoned argument” which they believe is the key to a successful
process of formative assessment. For them formative assessment is valid when it is
used to inform instruction and its use to inform instruction is based on a valid
interpretation:
The claim for formative assessment is that the information derived from students’
assessment performance can be used to improve student achievement. It is how that
information is used, not what the assessment tells us about current achievement, that
impacts future achievement. Therefore, use, based on a valid interpretation, is the primary
focus of the validity argument for formative assessments. (our emphasis, p15)

Nichols et al look for a model of formative assessment that can be trusted as valid. They
look for a model that will ensure that when teachers make judgments about learners’
reasoning behind a mathematical error (about why the learners think in that way, and
how their thinking can be addressed in teaching), they can account for the way they
reasoned about the evidence, and can show coherently how the evidence informed their
instructional plan. Their model was criticized strongly by Shepard (2009, p34), and for
good reasons; what is relevant for us, however, is the idea of "reasoned argument", of
teachers needing to ask and interpret "what does the evidence from student data say
about students' knowledge, skills and abilities" (p.16)? This idea is consistent with
Prediger's idea of "diagnostic competence" (teachers analyze and interpret learner
reasoning), Katz et al's notion of "accountability" (the ability to account for), and with
Black and Wiliam's idea of formative assessment (evidence about student achievement is
elicited and interpreted). Together, they support the notion of what we would like to call
"diagnostic judgement". To move away from the instrumentalist use of the current audit
culture, a culture that uses evaluation data mainly to benchmark learner performance
(and to blame and shame teachers), professional development of teachers needs to work
out ways to develop teachers' diagnostic judgement. By this we mean the ability to find
evidence of learning and to know how to work with that evidence. As professionals and
as specialists of a particular subject matter, teachers need to know how to elicit and
reason about evidence of learners’ errors.

Lessons about error analysis of mathematical evaluation data that can be taken from
DIPIP Phases 1 and 2:
The last question we need to address is: given these results, what could DIPIP have done
differently? Here we refer to the process document, which describes the linear thinking
that informed the design of its main activities. The groups followed the error analysis
with two different applications - lesson planning and teaching. All the groups worked on
their lesson plans for several weeks and then repeated the activity. Then all groups
designed a test and interviewed some learners on errors identified in their tests. Finally,
eleven groups continued into a second round of curriculum mapping and error analysis
while some groups went on to a third round. The idea behind this design sequence was
that once the groups had mapped the curriculum and conducted error analysis, they
would re-contextualise their learning from the error analysis into two core teaching
activities. In other words, they would start with "error-focused" activities, move to
"error-related" activities and then go back to error-focused activities. This would enable
them to be more reflective in their lesson design and in their teaching, as in both of these
activities the groups would work with what the curriculum requires (assessment
standards) but also with errors to be anticipated in teaching a particular mathematical
topic. This design
sequence was also informed by ideas from the field of assessment, in particular the
emphasis on integration between teaching and assessment - the key construct in the idea
of alignment between standardised and formative assessment.

The above analysis of teachers’ thinking about learners’ errors in evaluation tests
suggests that the idea of sequential development between these activities was too
ambitious. To ask teachers to apply their error knowledge which at most was only
partial in terms of its SMK aspect and very weak in terms of its diagnostic of learners
reasoning was certainly too ambitious, at least for teachers in SA. Teachers needed
repeated practice in “error-focused activities” before they can be expected to transfer
this knowledge into Knowledge of content and teaching (KCT), Ball et al’s 4th domain of
teacher knowledge (or second domain within PCK). The idea was ambitious in two
ways. First teachers needed more repetitive experience in doing error analysis. Error
analysis of evaluation data is new for teachers and they needed to develop their
competence in doing more of the same. They also needed a variation of questions. There
are pros and cons in working on multiple content questions and groups needed to work
with a variety of questions. Secondly, and even more importantly, our analysis shows

that teachers need to work on errors in relation to specific mathematical content. This is
consistent with Cohen and Ball (1999) who argue that change of teaching practice comes
as support for a certain content area in the curriculum and not vice versa. Teachers
cannot change their teaching practices in all the content areas they are teaching (Cohen
and Ball, 1999, p.9).34 If accepted, the conception of diagnostic judgement that we
propose shows that teachers need to know the content well and think about it carefully
when they are working on errors.

34
Cohen, D. K. and Ball, D. L. (1999) Instruction, Capacity and Improvement CPRE Research Report Series RR-43.

Recommendations for professional development and
for further research
DIPIP Phases 1 and 2 set out to include teachers in different professional activities all
linked to evaluation data and focused on errors. It had a development and a research
component. In this report we evaluate the error analysis activity and not the whole
process of development this is done in the process report. Based on the experience of
this project activity we make the following recommendations with regard to teacher
development and further research.

Recommendations for professional development


 Error analysis activities should be content-focused.
 Different error-focused activities should be used and coherently sequenced. These
should include error analysis in relation to different forms of mathematical questions
(open and multiple choice); the use of systemic tests and tests set by the teachers;
design of different kinds of mathematical questions, interviewing learners and
design of rubrics for evaluation of learners.
 Error analysis should be done in conjunction with curriculum mapping as this helps
teachers to understand the content and purpose of questions. We suggest that this
should be done with a specific content focus.
 Error analysis activities should be situated within content development
programmes. Structured content input is necessary for teachers in South Africa since
this is a prerequisite for diagnostic judgement.
 In order to avoid compromising on quality, modelling of what counts as full
explanations (including all procedural steps and conceptual links) is necessary. At all
times and through all activities the emphasis on full explanations must be made
explicit, which we believe will strengthen teachers' content knowledge.
 Learning materials for content-focused professional development of error analysis
should be developed. This was one of the aims of DIPIP 1 and 2.
 Expert leadership of groups working on error analysis activities is important to
enable quality instruction of mathematical content, promote depth of analysis and
fullness and variety of explanations. Nevertheless, judgement should be made as to
when is the appropriate time to remove this expert leadership.

Recommendations for further research


 More research is needed to identify different qualities of mathematical content
specific procedural explanations. This could inform the teaching of mathematical
content.

 Further research could investigate other discourses teachers draw on when they
analyse errors on a standardised test and on their own tests.
 A more in-depth study could be done on the use of everyday contexts in the
explanation of mathematical concepts and learners’ errors.
 The role of expert leaders in supporting teacher development through error analysis
needs further investigation.
 More research is needed to inform ways of aligning evaluation data results and
teachers’ practice.

References
Adler, J (2005). Mathematics for teaching: what is it and why is it important that we talk
about it? Pythagoras, 62 2-11.
Adler, J. & Huillet, D. (2008) The social production of mathematics for teaching. In P.
Sullivan & T. Wood (Eds), International handbook of mathematics teacher education:
Vol. 1. Knowledge and beliefs in mathematics teaching and teaching development
(pp. 195-222). Rotterdam: Sense Publishers.
Ball, D.L. (2011, January) Knowing Mathematics well enough to teach it. Mathematical
Knowledge for teaching (MKT). Presentation for the Initiative for Applied
Research in Education expert committee at the Israel Academy of Sciences and
Humanities, Jerusalem, Israel, January 30, 2011.
Ball, D.L., Hill, H.C., & Bass, H. (2005) “Knowing mathematics for teaching: who knows
mathematics well enough to teach third grade, and how can we decide?”
American Educator, 22: 14-22; 43-46.
Ball, D.L., Thames, M.H., & Phelps, G. (2008) “Content knowledge for teaching: what
makes it special?" Journal of Teacher Education 59 (5): 389-407.
Borasi, R. (1994) "Capitalizing on errors as 'springboards for inquiry': A teaching
experiment" in Journal for Research in Mathematics Education, 25(2): 166-208.
Boudett, K.P., City, E. & Murnane, R. (2005) (Eds) Data Wise A Step-by-Step Guide to Using
Assessment Results to Improve Teaching and Learning. Cambridge: Harvard
Education Press
Black, P. & Wiliam, D. (2006) Developing a theory of formative assessment in Gardner,
John (ed.) Assessment and Learning (London: Sage).
Brookhart S M (Ed) (2009) Special Issue on the Validity of Formative and Interim
assessment, Educational Measurement: Issues and Practice, Vol. 28 (3)
Brodie, K. (2011) Learning to engage with learners’ mathematical errors: an uneven
trajectory. In Maimala, T and Kwayisi, F. (Eds) Proceedings of the seventeenth
annual meeting of the South African Association for Research in Mathematics,
Science and Technology Education (SAARMSTE), (Vol 1, pp. 65-76). North-West
University, Mafikeng: SAARMSTE.
Brodie, K. & Berger, M (2010) Toward a discursive framework for learner errors in
Mathematics Proceedings of the eighteenth annual meeting of the Southern
African Association for Research in Mathematics, Science and Technology
Education (SAARMSTE) (pp. 169 - 181) Durban: University of KwaZulu-Natal
Brodie, K. & Shalem, Y. (2011) Accountability conversations: Mathematics teachers’
learning through challenge and solidarity, Journal of Mathematics teacher Education
14: 420-438

Dempster, E. (2006). Strategies for answering multiple choice questions among South
African learners: what can we learn from TIMSS 2003? In Proceedings of the 4th
Sub-Regional Conference on Assessment in Education, UMALUSI.
Dempster, E. & Zuma, S. (2010) Reasoning used by isiZulu-speaking children when
answering science questions in English. Journal of Education 50 pp. 35-59.
Departments of Basic Education and Higher Education and Training. (2011). Integrated
Strategic Planning Framework for Teacher Education and Development in South
Africa 2011-2025: Frequently Asked Questions. Pretoria: DoE.
Earl, L & Fullan M. (2003) Using data in Leadership for Learning (Taylor &Francis Group:
Canada)
Earl, L., & Katz, S. (2005). Learning from networked learning communities: Phase 2 key
features and inevitable tensions.
http://networkedlearning.ncsl.org.uk/collections/network-research-series/
reports/nlg-external-evaluation-phase-2-report.pdf.
Elmore, RE. 2002. Unwarranted intrusion. Education next. [Online]. Available url:
http://www.educationnext.org/20021/30.html
Fuhrman SH & Elmore, RE (2004) Redesigning Accountability Systems for Education.
(Teacher College Press: New York).
Gagatsis, A. and Kyriakides, L (2000) “Teachers’ attitudes towards their pupils’
mathematical errors”, Educational Research and Evaluation 6 (1): 24 – 58.
Gipps, C. (1999) “Socio-cultural aspects of assessment”. Review of Research in Education
24. pp 355-392.
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009), From evidence to action: A
seamless process in formative assessment? Op cit pp. 24-31. In S. Brookhart (Ed)
(2009) Special Issue on the Validity of Formative and Interim assessment,
Educational Measurement: Issues and Practice, Vol. 28 (3)
Hill, H.C., Ball, D.L., & Schilling, G.S. (2008a) “Unpacking pedagogical content
knowledge: conceptualizing and measuring teachers’ topic-specific knowledge of
students” Journal for Research in Mathematics Education 39 (4): 372-400.
Hill, H.C., Blunk, M.L., Charalambous, C.Y., Lewis, J.M., Phelps, G.C., Sleep, L. & Ball, D.L.
(2008b) Mathematical knowledge for teaching and the mathematical quality of
instruction: an exploratory study. Cognition and Instruction 26 (4): 430-511.
Hill, H.C. & Ball, D.L. (2009) "The curious - and crucial - case of mathematical knowledge
for teaching". Phi Delta Kappan 91 (2): 68-71.
Katz, S., Sutherland, S., & Earl, L. (2005). “Towards an evaluation habit of mind:
Mapping the journey”, Teachers College Record, 107(10): 2326–2350.
Katz, S., Earl, L., & Ben Jaafar, S. (2009). Building and connecting learning communities: The
power of networks for school improvement. Thousand Oaks, CA: Corwin.

Kramarski, B. & Zoldan, S. (2008). Using errors as springboards for enhancing
mathematical reasoning with three metacognitive approaches. The Journal of
Educational Research, 102 (2), 137-151.
Louw, M., van der Berg, S. & Yu, D. (2006) "Educational attainment and
intergenerational social mobility in South Africa". Working Papers 09/2006,
Stellenbosch University, Department of Economics.
Looney, J. W. (2011). Integrating Formative And Summative Assessment: Progress
Toward A Seamless System? OECD Education Working Paper No. 58 EDU/WKP
4 http://essantoandre.yolasite.com/resources.pdf
Long, C. (2007) What can we learn from TIMSS 2003?
http://www.amesa.org.za/AMESA2007/Volumes/Vol11.pdf
McNeil, LM. (2000) Contradictions of school reform: Educational costs of standardized testing.
New York: Routledge.
Muller, J. (2006) Differentiation and Progression in the curriculum. In M. Young and J. Gamble (Eds), Knowledge, Curriculum and Qualifications for South African Further Education. Cape Town: HSRC Press.
Nesher, P. (1987) Towards an instructional theory: The role of learners’ misconception
for the learning of Mathematics. For the Learning of Mathematics, 7(3): 33-39.
Nichols, P.D., Meyers, J.L. & Burling, K.S. (2009) A framework for evaluating and planning assessments intended to improve student achievement. In S. Brookhart (Ed), (2009), op cit pp. 14-23.
Peng, A. (2010). Teacher knowledge of students’ mathematical errors.
www.tktk.ee/bw_client_files/tktk_pealeht/.../MAVI16_Peng.pdf
Peng, A., & Luo, Z. (2009). A framework for examining mathematics teacher knowledge
as used in error analysis. For the Learning of Mathematics, 29(3), 22-25.
Prediger, S (2010) “How to develop mathematics-for-teaching and for understanding:
the case of meanings of the equal sign” , Journal of Math Teacher Education 13:73–
93.
Radatz, H. (1979) Error Analysis in Mathematics Education in Journal for Research in
Mathematics Education, 10 (3): 163- 172 http://www.jstor.org/stable/748804
Accessed: 08/06/2011 05:53.
Reddy, V. (2006) Mathematics and Science Achievement at South African Schools in
TIMSS 2003. Cape Town: Human Sciences Research Council.
Rusznyak, L. (2011) Seeking substance in student teaching in Y. Shalem & S Pendlebury
(Eds) Retrieving Teaching: Critical issues in Curriculum, Pedagogy and learning.
Claremont: Juta. pp 117-129
Shalem, Y. (2003) Do we have a theory of change? Calling change models to account.
Perspectives in Education 21 (1): 29 –49.
Shalem, Y., & Sapire, I. (2011). Curriculum Mapping: DIPIP 1 and 2. Johannesburg: Saide

Shalem, Y., Sapire, I., Welch, T., Bialobrzeska, M., & Hellman, L. (2011). Professional
learning communities for teacher development: The collaborative enquiry process in the
Data Informed Practice Improvement Project. Johannesburg: Saide. Available from
http://www.oerafrica.org/teachered/TeacherEducationOERResources/SearchResults/
tabid/934/mctl/Details/id/38939/Default.aspx

Shalem, Y. & Hoadley, U. (2009) The dual economy of schooling and teacher morale in
South Africa. International Studies in Sociology of Education. Vol. 19 (2) 119–134.
Shalem, Y. & Slonimsky, L. (2010) The concept of teaching. In Y. Shalem & S. Pendlebury (Eds) Op cit. pp 12-23.
Shavelson, R. Li, M. Ruiz-Primo, MA & Ayala, CC (2002) Evaluating new approaches to
assessing learning. Centre for the Study of Evaluation Report 604
http://www.cse.ucla.edu/products/Reports/R604.pdf
Shepard, L A (2009) Commentary: evaluating the validity of formative and Interim
assessment: Op cit pp. 32-37.
Smith, J.P., diSessa A. A., & Roschelle, J. (1993) Misconceptions Reconceived: A
Constructivist analysis of Knowledge in Transition. Journal of Learning Sciences
3(2): 115-163.
Star, J.R. (2000). On the relationship between knowing and doing in procedural learning.
In B. Fishman & S. O’Connor-Divelbiss (Eds), Fourth International Conference of the
Learning Sciences. 80-86. Mahwah, NJ: Erlbaum.
Taylor, N. (2007) “Equity, efficiency and the development of South African schools”. In
International handbook of school effectiveness and improvement, ed. T.
Townsend, 523. Dordrecht: Springer.
Taylor, N. (2008) What’s wrong with South African Schools? Presentation to the
Conference What’s Working in School Development. JET Education Services 28-29.

Van den Berg, S. (2011) “Low Quality Education as a poverty trap” University of
Stellenbosch.
Van den Berg, S. & Shepherd, D. (2008) Signalling performance: An analysis of
continuous assessment and matriculation examination marks in South African
schools. Pretoria: Umalusi

Appendices:

Appendix 1
Exemplar template- Error analysis – used in group leader training prior to the activity
Based on ICAS 2006, Grade 8 Question 22

Part A: Details
Grade: 8
Question: 22
Content: Solve a simple linear equation that has one unknown pronumeral
Area: Algebra
Difficulty: 27

Part B: Learner responses (%)


A: 20
B: 17
C: 28
D: 33
Correct answer: A
Analysis: More learners chose the incorrect answers C and D, which together (61%) account for roughly three times the 20% who chose the correct answer A.

Part C: Analysis of correct answer

Correct Answer A – 20%


Indicate all possible methods
1. Using the distributive law, multiply ½ into the binomial. Take the numbers onto the RHS, find an LCM, multiply out and solve for y.
2. Find an LCM which is 2. Multiply out and solve for y.
3. Substitution of 13 on a trial and error basis.
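
For reference, a worked version of method 2 (our sketch; the item itself gives only the equation ½(y + 3) = 8 and the answer options):

\begin{align*}
\tfrac{1}{2}(y+3) &= 8 && \text{multiply both sides by the LCM, which is 2}\\
y + 3 &= 16 && \text{take 3 to the RHS}\\
y &= 13
\end{align*}

Checking by substitution (method 3): ½(13 + 3) = ½ × 16 = 8, which confirms answer A.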

In order to get this answer learners must know:


1. Learners need to know how to determine a lowest common denominator (LCD)
and/or LCM.
2. Learners need to understand the properties of an equation in solution of the
unknown variable.
3. Learners need to know the procedure of solving an equation involving the RHS
and LHS operations.

Part D: Analysis of learners’ errors

For answer D – 33%


Indicate all possible methods
½(y+3) = 8
½ ½
y+3 =4
y =1

Possible reasons for errors:


Learners do not understand the concept of dividing by a fraction. They misconceive 8 divided by a half as a half of 8, so the answer that they get is 4.
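
In symbols (our sketch of the misconception described above):

\[
\tfrac{1}{2}(y+3) = 8 \;\Rightarrow\; y + 3 = 8 \div \tfrac{1}{2} = 16,
\quad \text{but learners compute } \tfrac{1}{2} \times 8 = 4, \text{ so } y + 3 = 4 \text{ and } y = 1.
\]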

For answer C – 28%


Indicate all possible methods
a.
½(y+3) = 8
½
y+3 =4
y =7
or
b.
½ (7+3)=8
7+3 =8+2
or
c.
½ (y+3)=8
½ y+6 =8
½ y = 14
y =7

Possible reasons for errors:


a. In the first method the learners made the same error of dividing by a fraction but did not change the sign when taking the terms over (LHS/RHS rule).
b. Substitution of the value 7.
c. In this method the learners did not multiply every term by the LCM (2). They also did not change the sign of the 6 when taking it over.

For answer B – 17%


Indicate all possible methods
a.
½ (y+3)=8
y+6 =16
y =10

b.
½ (10+3)=8
5+3 =8

c.
½ (y+3)=8
½ y =8-3
y =10

Possible reasons for errors:


Learners have made errors in the order of operations, substitution and the
distributive law.
Other issues with task
• The instruction could have read, “Solve for y”.

Issues for teaching


• Learners need to be able to check solutions of equations.
• Instructions to the learners in the classroom differ from the instructions of the task.
• Distributive Law
• Substitution
• Principles of rules/laws of equations, i.e. LHS = RHS

Appendix 2
Template for DIPIP error analysis of ICAS task analysis (In both Round 1 and 2)
(2008 error analysis – All groups Grades 3,4,5,6,7,8,9)
(2010 error analysis – One group from Grades 3,4,5,6,8)

Part A: Details
Grade
Question
Content/Description
Area/LO

Part B: Learner responses (%)


Anticipated achievement (Before you check the actual achievement):
    Expectation:
    Comment on any particular distractors:
A
B
C
D
Correct answer
Difficulty
Analysis

Part C: Analysis of correct answer


The way(s) of thinking that the learner might use in order to get the question correct
(fill in more than one possible way of thinking, if necessary):

Indicate for each method whether you think it is at the expected level for the grade OR
higher OR lower than the group would expect.

1.

2.

3.

4.

5.

Part D: Analysis of learners’ errors


Analysis of each of the distractors, in order, from the one selected most often to the one
selected least often. You may trial these with your learners – please report on this if you
do so.

For each distractor, think about what learners might have done to obtain the answer
in each incorrect choice:
For choice – %
The way(s) of thinking that the learners might have used in order to get this
particular incorrect choice using this particular method (fill in more than one
possible way of thinking, if necessary) ie. Possible reasons for errors/Possible
misconceptions
1.

2.

3.

4.
For choice – %
The way(s) of thinking that the learners might have used in order to get this
particular incorrect choice using this particular method (fill in more than one
possible way of thinking, if necessary) ie. Possible reasons for errors/Possible
misconceptions
1.

2.

3.

4.
For choice – %
The way(s) of thinking that the learners might have used in order to get this
particular incorrect choice using this particular method (fill in more than one
possible way of thinking, if necessary) ie. Possible reasons for errors/Possible
misconceptions
1.

2.

3.

4.

Other issues with task not discussed above


Issues to consider when teaching (for example, how to overcome some of the problems that have been raised.)

Appendix 3
Template for DIPIP error analysis of own test task analysis (In Round 2)
(2010 error analysis – Grades 3,4,5,6,7,9)

Part A: Details
Grade
Question: Please
transcribe the full
question here
Content/Description:
Please describe the
mathematical
content of the
question and any
other relevant detail.
Area/LO

Part B: Learner responses


Anticipated achievement:
    Expectation:
Sample number (How many tests do you have? Count and record this number. You will use all of your tests every time in your analysis)
Number of learners in sample that got the item correct (count the number of learners in your sample who got the item perfectly correct)
Number of learners in the sample that got the item partially correct (count the number of learners in your sample who did some partially correct working but did not complete the item correctly)
Number of learners in the sample that got the item wrong (count the number of learners in the test who were awarded a “zero” for the item)
Difficulty (work out the percentage of the sample who got the item right. You will do this for each item. Then place them on a relative scale for your test of easiest to hardest item.)
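
As an illustration of the arithmetic the “Difficulty” field asks for, the sketch below computes the percentage correct per item and ranks the items from easiest to hardest. The sample size, item names and counts are hypothetical and are not taken from the project data.

# A minimal sketch (hypothetical numbers, not project data) of the Difficulty calculation
sample_size = 40                                        # total tests in the sample
correct = {"Item 1": 32, "Item 2": 18, "Item 3": 26}    # learners who got each item right

difficulty = {item: 100 * n / sample_size for item, n in correct.items()}

# Relative scale from easiest (highest percentage correct) to hardest
for item in sorted(difficulty, key=difficulty.get, reverse=True):
    print(f"{item}: {difficulty[item]:.0f}% correct")
# Output: Item 1: 80% correct, Item 3: 65% correct, Item 2: 45% correct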

Part C: Analysis of correct answer


Go through your sample of learner tests to find as many different ways of getting the
question correct. Then record these ways and write about the way(s) of thinking that the
learner might use in order to get the question correct (fill in more than one possible way
of thinking, if necessary): USE YOUR LEARNER TESTS to generate this list.

Indicate for each method whether you think it is at the expected level for the grade OR
higher OR lower than the group would expect.
1.

2.

3.

4.

5.

Part D: Analysis of learners’ errors


Go through your sample of learner tests to find as many different ways of getting the
question incorrect. Then record these ways and write about the way(s) of thinking that the
learner might use in order to get the question incorrect, i.e. Possible reasons for
errors/Possible misconceptions (fill in more than one possible way of thinking, if
necessary): USE YOUR LEARNER TESTS to generate this list.
1.

2.

3.

4.

5.

Other issues with task not discussed above

Issues to consider when teaching (for example, how to overcome some of the
problems that have been raised.)

Appendix 4
Error analysis coding template – final version (exemplar with one Grade 3 item)

Columns: Grade | Item | Maths content area | Distractor | Text no | Text (each item of text from template) | Procedural (Categ.) | Conceptual (Categ.) | Awareness of error (Categ.) | Diagnostic reasoning (Categ.) | Use of everyday | Distinct explanation (F/N) | Comments

Grade 3, Item 1, Number, Distractor C
Text 1: They counted the books on the shelf. To do this they must know how to count correctly, be able to identify a book as a unit and they must distinguish between the ends of a shelf and a book so that they do not mistake the ends of a shelf as another book.
Text 2: They looked at the shelf and saw 5 books on the left and 2 on the right and added them together.
Text 3: Counted every book that had a label on it.

Grade 3, Item 1, Number, Distractor B
Text 1: Subtracted the books that were standing up straight on the left and right and counted the three books that are slanting in the middle. They mistook the books on the left and right as part of the shelf.
Text 2: Focused on the three thicker books. The thicker books are more visible.
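
To make the row structure of the template concrete, one coded text from the exemplar above could be recorded as follows. This is a minimal sketch in Python; the field names and the category values are our own illustrative choices, not codes assigned by the project.

coded_text = {
    "grade": 3,
    "item": 1,
    "maths_content_area": "Number",
    "distractor": "C",
    "text_no": 1,
    "text": "They counted the books on the shelf. ...",
    # One category per criterion (Full / Partial / Inaccurate / Not present);
    # the values below are hypothetical, for illustration only.
    "procedural": "Full",
    "conceptual": "Partial",
    "awareness_of_error": "Full",
    "diagnostic_reasoning": "Partial",
    "use_of_everyday": "Not present",
    "distinct_explanation": "F",   # F/N: mathematically feasible or not
    "comments": "",
}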
Appendix 5
Error analysis – Template criteria for coding

Criteria and category descriptors: Error analysis

Explanations of the correct solution (categories: Full, Partial, Inaccurate, Not present)

Procedural
The emphasis of this code is on the teachers’ procedural explanation of the solution. Teaching mathematics involves a great deal of procedural explanation which should be done fully and accurately for the learners to grasp and become competent in working with the procedures themselves.
Full: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate and includes all of the steps in the procedure.
Partial: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate but it does not include all of the steps in the procedure.
Inaccurate: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. Teacher’s use of procedure is inaccurate or incomplete to the extent that it could be confusing.
Not present: No mathematical procedural explanation is given.

Conceptual
The emphasis of this code is on the teachers’ conceptual explanation of the procedure/other reasoning followed in the solution. Mathematical reasoning (procedural/other) needs to be unpacked and linked to the concepts to which it relates in order for learners to understand the mathematics embedded in the activity.
Full: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation illuminates conceptually the background and process of the activity.
Partial: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes some but not all of the conceptual links which illuminate the background and process of the activity.
Inaccurate: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes incorrect or poorly conceived conceptual links and thus is potentially confusing.
Not present: No conceptual links are made in the explanation.

Explanations of the incorrect solution (error in distractor) (categories: Full, Partial, Inaccurate, Not present)

Awareness of error
The emphasis of this code is on teachers’ explanation of the actual mathematical error and not on learners’ reasoning.
Full: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound and suggests links to common misconceptions or errors.
Partial: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound but does not link to common misconceptions or errors.
Inaccurate: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically inaccurate.
Not present: No mathematical explanation is given of the particular error.

Diagnostic reasoning
The idea of error analysis goes beyond identifying a common error and misconception. The idea is to understand the way teachers go beyond the actual error to try and follow the way the learners were reasoning when they made the error. The emphasis of this code is on teachers’ attempt to provide rationale for how learners were reasoning mathematically when they chose the distractor.
Full: Teachers describe learners’ mathematical reasoning behind the error. It describes the steps of learners’ mathematical reasoning systematically and hones in on the particular error.
Partial: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning is incomplete although it does hone in on the particular error.
Inaccurate: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning does not hone in on the particular error.
Not present: No attempt is made to describe learners’ mathematical reasoning behind the particular error.

Use of everyday knowledge
Teachers often explain why learners make an error by appealing to everyday experiences that learners draw on and confuse with the mathematical context of the question. The emphasis of this code is on the quality of the use of everyday, judged by the links made to the mathematical understanding teachers try to advance.
Full: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teachers’ use of the ‘everyday’ enables mathematical understanding by making the link between the everyday and the mathematical clear.
Partial: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teacher’s use of the ‘everyday’ is relevant but does not properly explain the link to mathematical understanding.
Inaccurate: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teacher’s use of the ‘everyday’ dominates and obscures the mathematical understanding; no link to mathematical understanding is made.
Not present: No discussion of the everyday is done.

Multiple explanations of error
One of the challenges in error analysis is for learners to hear more than one explanation of the error. This is because some explanations are more accurate or more accessible than others. This code examines the teachers’ explanation(s) of the error itself rather than the explanation of learners’ reasoning.
Full: Multiple mathematical explanations are provided. All of the explanations (two or more) are mathematically feasible/convincing.
Partial: Multiple mathematical and general explanations are provided. At least two of the mathematical explanations are feasible/convincing.
Inaccurate: Multiple mathematical and general explanations are provided. One mathematically feasible/convincing explanation is provided.
Not present: No mathematically feasible/convincing explanation is provided.

This is coded F/N (mathematically feasible/not) for each new and different explanation offered
by the teacher. The final code is assigned according to the level descriptors above.
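
One possible reading of how the per-explanation F/N codes combine into the final category is sketched below. The function and its logic are ours, offered only to illustrate the level descriptors; the project’s codes were assigned by trained coders, not by a script.

def multiple_explanations_category(explanations):
    """explanations: list of (is_mathematical, is_feasible) pairs, one per new
    and different explanation offered by the teacher (the F/N coding)."""
    math_flags = [feasible for is_math, feasible in explanations if is_math]
    general = [feasible for is_math, feasible in explanations if not is_math]
    feasible_count = sum(math_flags)
    if feasible_count == 0:
        return "Not present"   # no mathematically feasible/convincing explanation
    if feasible_count == 1:
        return "Inaccurate"    # only one mathematically feasible explanation
    if not general and all(math_flags):
        return "Full"          # multiple mathematical explanations, all feasible
    return "Partial"           # at least two feasible mathematical explanations,
                               # alongside general or non-feasible ones

# Example: two feasible mathematical explanations and no general ones -> "Full"
print(multiple_explanations_category([(True, True), (True, True)]))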

Appendix 6
Procedural explanations

The literature emphasises that the quality of teachers’ explanations depends on the
balance they achieve between explaining the procedure required for addressing a
mathematical question and the mathematical concepts underlying the procedure. This
criterion aims to grade the quality of the teachers’ procedural explanations of the correct
answer. The emphasis in the criterion is on the quality of the teachers’ procedural
explanations when discussing the solution to a mathematical problem through engaging
with learner test data. Teaching mathematics involves a great deal of procedural
explanation which should be done fully and accurately for the learners to grasp and
become competent in working with the procedures themselves. The four categories,
which capture the quality of the procedural explanations demonstrated by a
teacher/group, are presented in Table A1 below.

Table A1: Category descriptors for “procedural explanations”

Procedural: The emphasis of this code is on the teachers’ procedural explanation of the solution. Teaching mathematics involves a great deal of procedural explanation which should be done fully and accurately for the learners to grasp and become competent in working with the procedures themselves.

Full: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate and includes all of the key steps in the procedure.

Partial: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate but it does not include all of the key steps in the procedure.

Inaccurate: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. Teacher’s use of procedure is inaccurate and thus shows lack of understanding of the procedure.

Not present: No procedural explanation is given.

Exemplars of coded correct answer texts for procedural explanations:

The first set of exemplars relates to a single ICAS item to show the vertical
differentiation of the correct answer codes in relation to one mathematical concept under
discussion. These exemplar texts present explanations that received the same level
coding on the two criteria of procedural and conceptual explanations (see Table A5).
This would not always be the case. We have chosen to present such exemplars because
of the high correlation between procedural and conceptual explanations.

Table A2: Criterion 1 – Procedural explanation of the choice of the correct solution in relation
to one item
Test Item (Grade 8 item 8 ICAS 2006):
Which row contains only square numbers?
(A) 2 4 8 16
(B) 4 16 32 64
(C) 4 16 36 64
(D) 16 36 64 96
Selection – correct answer: C

Criterion wording (Procedural): The emphasis of this code is on the teachers’ procedural explanation of the solution. Teaching mathematics involves a great deal of procedural explanation which should be done fully and accurately for the learners to grasp and become competent in working with the procedures themselves.

Text (code: Full): 1² = 1; 2² = 4; 3² = 9; 4² = 16; 5² = 25; 6² = 36; 7² = 49; 8² = 64. Therefore the row with 4; 16; 36; 64 only has square numbers. To get this right they need to know what “square numbers” mean and to be able to calculate or recognize which of the rows consists only of square numbers.
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate and includes all of the key steps in the procedure.

Text (code: Partial): Learners would illustrate squares to choose the right row.
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate but it does not include all of the key steps in the procedure.

Text (code: Inaccurate): Learners could calculate the square roots of all the combinations in order to discover the correct one. To get this learners need to know how to use the square root operation.
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. Teacher’s use of procedure is inaccurate and thus shows lack of understanding of the procedure.

Text (code: Not present): Learners understood the question and chose the right row.
Category descriptor: No procedural explanation is given.

The next set of exemplars relates to various ICAS items to show the differentiation of the correct answer codes in relation to a spread of mathematical concepts.

Table A3: Criterion 1 – Procedural explanation of the choice of the correct solution in relation
to a range of items

Criterion wording (Procedural): The emphasis of this code is on the teachers’ procedural explanation of the solution. Teaching mathematics involves a great deal of procedural explanation which should be done fully and accurately for the learners to grasp and become competent in working with the procedures themselves.

Item: ICAS 2007 Grade 9 Item 24; selected correct answer – C; content area: Geometry
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate and includes all of the key steps in the procedure.
Text (Full explanation): Learners will take the 4 800 revolutions and divide it by 60 to convert the answer to seconds and obtain an answer of 80; they then take this answer and multiply it by 360 for the number of degrees in a circle to obtain the correct answer of 28 800.

Item: ICAS 2006 Grade 7 Item 3; selected correct answer – A; content area: Number
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. The procedure is accurate but it does not include all of the key steps in the procedure.
Text (Partial explanation): Learners used the division line correctly inside the circle and counted the pieces correctly. They fully understood that this division in a circle happens from the centre, and that thirds mean that there are three equal parts.

Item: ICAS 2007 Grade 6 Item 8; selected correct answer – B; content area: Measurement
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes demonstration of procedure. Teacher’s use of procedure is inaccurate and thus shows lack of understanding of the procedure.
Text (Inaccurate explanation): 1,3 + 1,3 = 2,6 +1,3 = 3,9 +1,3 = 5,2 +1,3 = 6,5 +1,3 = 7,8 +1,3 = 9,1 + 1,3 = 10,4 (that’s the closest).

Item: ICAS 2006 Grade 5 Item 12; selected correct answer – B; content area: Data Handling
Category descriptor: No procedural explanation is given.
Text (Mathematical explanation not present): Learners managed to get the selected distractor by focusing on the key
Appendix 7
Conceptual explanations

The emphasis in this criterion is on the conceptual links made by the teachers in their
explanations of the learners’ mathematical reasoning in relation to the correct answer.
Mathematical procedures need to be unpacked and linked to the concepts to which they
relate in order for learners to understand the mathematics embedded in the procedure.
The emphasis of the criterion is on the quality of the teachers’ conceptual links made in
their explanations when discussing the solution to a mathematical problem through
engaging with learner test data. The four level descriptors for this criterion, which
capture the quality of the conceptual explanations demonstrated by a teacher/group, are
presented in Table A4 below.

Table A4: Category descriptors for “conceptual explanations”

Conceptual: The emphasis of this code is on the teachers’ conceptual explanation of the procedure/other reasoning followed in the solution. Mathematical reasoning (procedural/other) needs to be unpacked and linked to the concepts to which it relates in order for learners to understand the mathematics embedded in the activity.

Full: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation illuminates conceptually the background and process of the activity.

Partial: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes some but not all of the key conceptual links which illuminate the background and process of the activity.

Inaccurate: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes poorly conceived conceptual links and thus is potentially confusing.

Not present: No conceptual links are made in the explanation.

Exemplars of coded correct answer texts for conceptual explanations


The first set of exemplars relates to a single ICAS item to show the vertical
differentiation of the correct answer codes in relation to one mathematical concept under
discussion. These exemplar texts present explanations that received the same level

coding on the two criteria of procedural (see Table A2) and conceptual explanations.
This would not always be the case. We have chosen to present such exemplars because
of the high correlation between procedural and conceptual explanations.

Table A5: Criterion 2 – Conceptual explanation of the choice of the correct solution in relation
to one item
Test Item (Grade 8 item 8 ICAS 2006); content area: Number:
Which row contains only square numbers?
(a) 2 4 8 16
(b) 4 16 32 64
(c) 4 16 36 64
(d) 16 36 64 96
Selection – correct answer: C

Criterion wording (Conceptual): The emphasis of this code is on the teachers’ conceptual explanation of the procedure/other reasoning followed in the solution. Mathematical reasoning (procedural/other) needs to be unpacked and linked to the concepts to which it relates in order for learners to understand the mathematics embedded in the activity.

Text (code: Full): 1² = 1; 2² = 4; 3² = 9; 4² = 16; 5² = 25; 6² = 36; 7² = 49; 8² = 64. Therefore the row with 4; 16; 36; 64 only has square numbers. To get this right they need to know what “square numbers” mean and to be able to calculate or recognize which of the rows consists only of square numbers.
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation illuminates conceptually the background and process of the activity.

Text (code: Partial): Learners would illustrate squares to choose the right row.
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes some but not all of the key conceptual links which illuminate the background and process of the activity.

Text (code: Inaccurate): Learners could calculate the square roots of all the combinations in order to discover the correct one. To get this learners need to know how to use the square root operation.
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes poorly conceived conceptual links and thus is potentially confusing.

Text (code: Not present): Learners understood the question and chose the right row.
Category descriptor: No conceptual links are made in the explanation.

Table A6: Criterion 2 – Conceptual explanation of the choice of the correct solution in relation
to a range of items

Criterion wording (Conceptual): The emphasis of this code is on the teachers’ conceptual explanation of the procedure/other reasoning followed in the solution. Mathematical reasoning (procedural/other) needs to be unpacked and linked to the concepts to which it relates in order for learners to understand the mathematics embedded in the activity.

Item: “Own test” Grade 4 Item; content area: Measurement
B) Mpho has 2 l of milk. How many cups of 100ml can he fill? __________ (2)
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation illuminates conceptually the background and process of the activity.
Text (Full explanation): [Learner’s written working: four rows of five cups of 100 ml each, numbered 1 to 20, with each row marked as 500 ml and each pair of rows as 1000 ml = 1 l.] Learner then counted the number of groups of 100 to calculate how many cups and got 20.

Item: ICAS 2006 Grade 5 Item 30; selected correct answer – B; content area: Pattern and Algebra
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes some but not all of the key conceptual links which illuminate the background and process of the activity.
Text (Partial explanation): Sequence of counting numbers were used matching them with odd numbers, so that for every blue block there are two pink blocks added.

Item: ICAS 2007 Grade 9 Item 8; selected correct answer – D; content area: Number
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the solution includes conceptual links. The explanation includes poorly conceived conceptual links and thus is potentially confusing.
Text (Inaccurate explanation): The learner will use the information that 6 slices of pizza make a whole and therefore the denominator for this fraction is 6. They will then take the 16 slices eaten and make that the numerator and hence ascertain how many whole pizzas had been eaten.

Item: ICAS 2006 Grade 3 Item 14; selected correct answer – D; content area: Data handling
Category descriptor: No conceptual links are made in the explanation.
Text (Mathematical explanation not present): Learners are attracted to yellow as the brightest colour. Young learners often attracted to the brightest colour. Learners didn’t understand the question.

Appendix 8
Awareness of error

The emphasis in this criterion is on the teachers’ explanations of the actual mathematical error (and not on the learners’ reasoning), and in particular on the mathematical quality of those explanations when discussing the solution to a mathematical problem. The four level descriptors for this criterion, which capture the quality of the awareness of error demonstrated by a teacher/group, are presented in Table A7 below.

Table A7: Category descriptors for “awareness of mathematical error”

Awareness of error: The emphasis of this code is on teachers’ explanation of the actual mathematical error and not on learners’ reasoning.

Full: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound and suggests links to common misconceptions or errors.

Partial: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound but does not link to common misconceptions or errors.

Inaccurate: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically inaccurate or incomplete and hence potentially confusing.

Not present: No mathematical explanation is given of the particular error.

Exemplars of error answer texts for awareness of error criterion:


As with the exemplars for procedural explanations, the first set of exemplars relates to a
single ICAS item to show the vertical differentiation of the correct answer codes in
relation to one mathematical concept under discussion. These exemplar texts present
explanations that received the same level coding on the two criteria of awareness of the
error and diagnostic reasoning (see Table A11). This would not always be the case. We
have chosen to present such exemplars because of the high correlation between
awareness of the error and diagnostic reasoning.

Table A8: Criterion 3 – Awareness of the error embedded in the incorrect solution in relation
to one item
Test Item (Grade 7 item 26 ICAS 2006):
Selection – incorrect answer: B

Criterion wording (Awareness of error): The emphasis of this code is on teachers’ explanation of the actual mathematical error and not on learners’ reasoning.

Text (code: Full): The pupils started from the highlighted 3 and then counted in groups of 3 and then just added 1 because the next box being 7 was highlighted. They then continued this pattern throughout and found that it worked well. The pattern worked and so they assumed that this was the correct answer but ignored the word multiples. They just counted in three’s (plus 1) to get the answer.
Category descriptor: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound and suggests links to common misconceptions or errors.

Text (code: Partial): Perhaps the pupils looked at the 3 and the 1 in the answer, decided that 3 + 1 = 4 and thus started from the highlighted 3 and then counted in four’s and thus got the numbers of 3; 11; 15; 19; etc.
Category descriptor: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound but does not link to common misconceptions or errors.

Text (code: Inaccurate): The pupils have a poor understanding of the word multiple, as they just counted in groups of three’s and not in multiples of three’s.
Category descriptor: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically inaccurate.

Text (code: Not present): The pupils did not work out the actual sum, they just read the question, it looked similar to the answer and thus chose (multiples of 3) + 1.
Category descriptor: No mathematical explanation is given of the particular error.

Table A9: Criterion 3 – Awareness of the error embedded in the incorrect solution in relation
to a range of items

Criterion wording (Awareness of error): The emphasis of this code is on teachers’ explanation of the actual mathematical error and not on learners’ reasoning.

Item: ICAS 2007 Grade 5 Item 17; selected distractor – D; content area: Measurement
Category descriptor: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound and suggests links to common misconceptions or errors.
Text (Full explanation): After the learner got the white square area which was 64 he/she did not subtract the dark square area from the area of the white square. This is another halfway step to finding the answer.

Item: ICAS 2006 Grade 6 Item 5; selected distractor – C; content area: Number
Category descriptor: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically sound but does not link to common misconceptions or errors.
Text (Partial explanation): Subtracted or added the 42 and the 6 because they misinterpreted the division sign.

Item: ICAS 2007 Grade 4 Item 15; selected distractor – C; content area: Number
Category descriptor: Teachers explain the mathematical error made by the learner. The explanation of the particular error is mathematically inaccurate or incomplete and hence potentially confusing.
Text (Inaccurate explanation): The learners might have considered the last digit of the product (37) and decided to choose this answer because it has 7x7.

Item: ICAS 2006 Grade 9 Item 20; selected distractor – A; content area: Measurement
Category descriptor: No mathematical explanation is given of the particular error.
Text (Mathematical explanation not present): They did not answer the question, yet chose to answer another: How deep is the anchor under water?

Appendix 9
Diagnostic Reasoning

The idea of error analysis goes beyond identifying the actual mathematical error. The
idea is to understand how teachers go beyond the mathematical error and follow the
way learners may have been reasoning when they made the error. The emphasis in the
criterion is on the quality of the teachers’ attempt to provide a rationale for how learners
were reasoning mathematically when they chose a distractor. The four level descriptors
for this criterion, which capture the quality of the diagnostic reasoning demonstrated by
a teacher/group, are presented in Table A10 below.

Table A10: Category descriptors for “diagnostic reasoning”

Diagnostic reasoning: The idea of error analysis goes beyond identifying a common error and misconception. The idea is to understand the way teachers go beyond the actual error to try and follow the way the learners were reasoning when they made the error. The emphasis of this code is on teachers’ attempt to provide rationale for how learners were reasoning mathematically when they chose the distractor.

Full: Teachers describe learners’ mathematical reasoning behind the error. The description of the steps of learners’ mathematical reasoning is systematic and hones in on the particular error.

Partial: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning is incomplete although it does hone in on the particular error.

Inaccurate: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning does not hone in on the particular error.

Not present: No attempt is made to describe learners’ mathematical reasoning behind the particular error.

Exemplars of coded error answer texts for diagnostic reasoning:


As with the exemplars for procedural explanations, the first set of exemplars relates to a
single ICAS item to show the vertical differentiation of the correct answer codes in
relation to one mathematical concept under discussion. These exemplar texts present
explanations that received the same level coding on the two criteria of awareness of the
error (see Table A8) and diagnostic reasoning. This would not always be the case. We
have chosen to present such exemplars because of the high correlation between
awareness of the error and diagnostic reasoning.

Table A11: Criterion 4 – Diagnostic reasoning of learner when selecting the incorrect solution
in relation to one item
Test Item (Grade 7 item 26 ICAS 2006):
Selection – incorrect answer: B

Criterion wording (Diagnostic reasoning): The idea of error analysis goes beyond identifying a common error and misconception. The idea is to understand the way teachers go beyond the actual error to try and follow the way the learners were reasoning when they made the error. The emphasis of this code is on teachers’ attempt to provide rationale for how learners were reasoning mathematically when they chose the distractor.

Text (code: Full): The pupils started from the highlighted 3 and then counted in groups of 3 and then just added 1 because the next box being 7 was highlighted. They then continued this pattern throughout and found that it worked well. The pattern worked and so they assumed that this was the correct answer but ignored the word multiples. They just counted in three’s (plus 1) to get the answer.
Category descriptor: Teachers describe learners’ mathematical reasoning behind the error. It describes the steps of learners’ mathematical reasoning systematically and hones in on the particular error.

Text (code: Partial): Perhaps the pupils looked at the 3 and the 1 in the answer, decided that 3 + 1 = 4 and thus started from the highlighted 3 and then counted in four’s and thus got the numbers of 3 – 11 – 15 – 19 – etc.
Category descriptor: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning is incomplete although it does hone in on the particular error.

Text (code: Inaccurate): The pupils have a poor understanding of the word multiple, as they just counted in groups of three’s and not in multiples of three’s.
Category descriptor: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning does not hone in on the particular error.

Text (code: Not present): The pupils did not work out the actual sum, they just read the question, it looked similar to the answer and thus chose (multiples of 3) + 1.
Category descriptor: No attempt is made to describe learners’ mathematical reasoning behind the particular error.

Table A12: Criterion 4 – Diagnostic reasoning of learner when selecting the incorrect solution
in relation to a range of items

Criterion wording (Diagnostic reasoning): The idea of error analysis goes beyond identifying a common error and misconception. The idea is to understand the way teachers go beyond the actual error to try and follow the way the learners were reasoning when they made the error. The emphasis of this code is on teachers’ attempt to provide rationale for how learners were reasoning mathematically when they chose the distractor.

Item: ICAS 2007 Grade 3 Item 16; selected distractor – C; content area: Measurement
Category descriptor: Teachers describe learners’ mathematical reasoning behind the error. It describes the steps of learners’ mathematical reasoning systematically and hones in on the particular error.
Text (Full explanation): Learners started at the cross, moved to Green Forest South (down) and then must have moved East and not West, and therefore ended at White Beach because they did not know, or were confused about, the direction East/West. Learners may have moved towards the "W" for West on the drawing of the compass which is given on the map. This compass drawing might also have confused them and made them go to the east when they should have gone west, though they did go "down" for South initially in order to end up at White Beach.

Item: ICAS 2006 Grade 4 Item 20; selected distractor – C; content area: Geometry
Category descriptor: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning is incomplete although it does hone in on the particular error.
Text (Partial explanation): This looks most like tiles we see on bathroom walls, even though it is a parallelogram, so even though it is NOT rectangular, learners selected this as the correct answer. They did not realize that this shape would go over the edges or create gaps at the edges if it was used.

Item: ICAS 2007 Grade 5 Item 14; selected distractor – B; content area: Measurement
Category descriptor: Teachers describe learners’ mathematical reasoning behind the error. The description of the learners’ mathematical reasoning does not hone in on the particular error.
Text (Inaccurate explanation): The distractor selected most was B. The likelihood is that the learners could choose 600 because they could have seen six years in the question.

Item: ICAS 2006 Grade 6 Item 7; selected distractor – C; content area: Number
Category descriptor: No attempt is made to describe learners’ mathematical reasoning behind the particular error.
Text (Mathematical explanation not present): Not thinking systematically to devise a correct formula to obtain the number of cats. This learner is perhaps feeling pressurised to produce a formula describing the situation, rather than manually grouping the number of legs for each cat. There is a question here of whether the learner understands the meaning of the numbers in the formula s/he has used to calculate the number of cats.

Appendix 10
Use of the everyday

Teachers often explain why learners make mathematical errors by appealing to everyday
experiences that learners draw on and confuse with the mathematical context of the
question. The emphasis in this criterion is on the quality of the use of everyday
knowledge, judged by the links made to the mathematical understanding that the
teachers attempt to advance. The four level descriptors for this criterion, which capture
the quality of the use of everyday knowledge demonstrated by a teacher/group, are
presented in Table A13 below.

Table A13: Category descriptors for “use of everyday”

Use of everyday knowledge: Teachers often explain why learners make an error by appealing to everyday experiences that learners draw on and confuse with the mathematical context of the question. The emphasis of this code is on the quality of the use of everyday, judged by the links made to the mathematical understanding teachers try to advance.

Full: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teachers’ use of the ‘everyday’ enables mathematical understanding by making the link between the everyday and the mathematical clear.

Partial: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teacher’s use of the ‘everyday’ is relevant but does not properly explain the link to mathematical understanding.

Inaccurate: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teacher’s use of the ‘everyday’ dominates and obscures the mathematical understanding; no link to mathematical understanding is made.

Not present: No discussion of everyday is done.

“Use of the everyday” was used to measure the extent to which teachers used everyday
contexts to enlighten learners about mathematical concepts. In the exemplar tables, a
variety of different items where teachers used everyday contexts to add to their
mathematical explanation of the reasoning behind an error are given.

The code “no discussion of the everyday” includes all explanations which did not make reference to the everyday, whether or not such reference would have been appropriate. Further, more detailed analysis of the kinds of contextualized explanations which were offered, and of whether they were missing when they should have been present, could be carried out in future work; this was not within the scope of this report.

Exemplars of coded error answer texts for everyday criterion

Table A14: Use of the everyday exemplars

Criterion wording (Use of everyday knowledge): Teachers often explain why learners make an error by appealing to everyday experiences that learners draw on and confuse with the mathematical context of the question. The emphasis of this code is on the quality of the use of everyday, judged by the links made to the mathematical understanding teachers try to advance.

Item: ICAS 2006 Grade 9 Item 6; selected distractor – D; content area: Measurement
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teachers’ use of the ‘everyday’ enables mathematical understanding by making the link between the everyday and the mathematical clear.
Text (Full explanation): He draws on his frame of reference of how he perceives a litre to be, e.g. a 1,25l of cold drink or a 1l of milk or a 2l of coke etc.

Item: ICAS 2006 Grade 7 Item 9; selected distractor – A; content area: Data Handling
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teacher’s use of the ‘everyday’ is relevant but does not properly explain the link to mathematical understanding.
Text (Partial explanation): Here learners did not read the question attentively and misinterpreted or expected that the most popular sport would be the question – this is football which has a frequency of 7 (the highest). This could be the case because of the way in which teachers use these bar graphs in class. The most popular choice of question is always the question about the most or least popular choice as indicated on the graph.

Item: ICAS 2007 Grade 3 Item 20; selected distractor – B; content area: Measurement
Category descriptor: Teachers’ explanation of the learners’ mathematical reasoning behind the error appeals to the everyday. Teacher’s use of the ‘everyday’ dominates and obscures the mathematical understanding; no link to mathematical understanding is made.
Text (Inaccurate explanation): In the last option the circle/ball managed to pull the cylinder.

Item: ICAS 2006 Grade 5 Item 11; selected distractor – B; content area: Geometry
Category descriptor: No discussion of everyday is done.
Text (Mathematical explanation not present): Some could have guessed just to have an answer.

Appendix 11
Multiple explanations

One of the challenges in the teaching of mathematics is that learners need to hear more
than one explanation of the error. This is because some explanations are more accurate
or more accessible than others and errors need to be explained in different ways for
different learners. This criterion examines the teachers’ ability to offer alternative
explanations of the error when they are engaging with learners’ errors through analysis
of learner test data. The four level descriptors for this criterion, which capture the
quality of the multiple explanations of error demonstrated by the group, are presented
in Table A15 below.

Table A15: Category descriptors for “multiple explanations”

Multiple explanations of error: One of the challenges in error analysis is for learners to hear more than one explanation of the error. This is because some explanations are more accurate or more accessible than others. This code examines the teachers’ explanation(s) of the error itself rather than the explanation of learners’ reasoning.

Full: Multiple mathematical explanations are provided. All of the explanations (two or more) are mathematically feasible/convincing.

Partial: Multiple mathematical and general explanations are provided. At least two of the mathematical explanations are feasible/convincing (combined with general explanations).

Inaccurate: Multiple mathematical and general explanations are provided. One mathematically feasible/convincing explanation provided (with/without general explanations).

Not present: No mathematically feasible/convincing explanation provided.

“Multiple explanations” was used to evaluate the range of explanations that groups gave of learners’ thinking behind the incorrect answer. This criterion looked at whether teachers offered a number of different explanations in relation to the concept. For multiple explanations we again give a variety of different items where teachers gave more than one mathematically feasible explanation of the reasoning behind an error.

Exemplars of coded error answer texts for multiple explanations:

Table A16: Multiple explanations exemplars

Criterion wording (Multiple explanations of error): One of the challenges in error analysis is for learners to hear more than one explanation of the error. This is because some explanations are more accurate or more accessible than others. This code examines the teachers’ explanation(s) of the error itself rather than the explanation of learners’ reasoning.

Item: ICAS 2007 Grade 7 Item 14; selected distractor – B; content area: Measurement
Category descriptor: Multiple mathematical explanations are provided. All of the explanations (two or more) are mathematically feasible/convincing.
Text (Full explanation):
1. Learners said 6 x 3 = 18. They took the length given and multiplied by the three sides of the triangle.
2. They confused the radius and diameter. They halved 6cm to get 3cm and then multiplied by 6, because the triangle is made up of six radii.

Item: ICAS 2006 Grade 3 Item 4; selected distractor – A; content area: Geometry
Category descriptor: Multiple mathematical and general explanations are provided. At least two of the mathematical explanations are feasible/convincing.
Text (Partial explanation):
1. The learners did not focus on the handle, but on the splayed out bristles or on the broom as a whole. See the bristles as the most important part of a broom.
2. Did not read or understand the question.
3. The learners focused on the broom as a whole, because learners have difficulty separating the handle from the broom in their mind’s eye.

Item: ICAS 2007 Grade 6 Item 14; selected distractor – A; content area: Data and Chance
Category descriptor: Multiple mathematical and general explanations are provided. One mathematically feasible/convincing explanation provided.
Text (Inaccurate explanation): If they read from 0 to 1000 there are 3 species of fish. They are reading the bar not the line. (They are distracted by the 2-D nature of the diagram.)

Item: ICAS 2006 Grade 4 Item 2; selected distractor – ; content area: Number
Category descriptor: No mathematically feasible/convincing explanation provided.
Text (Mathematical explanation not present):
1. They might be distracted by the word “playing” and choose particular dogs according to what they identify as a dog that is playing.
2. They might not identify all of the animals as dogs, and hence not count all of them.
3. We see these as issues of interpretation, of the language and of the diagram, rather than mathematical misconceptions.
