ASSESSMENT & EVALUATION IN HIGHER EDUCATION
2020, VOL. 45, NO. 6, 901–911
https://doi.org/10.1080/02602938.2019.1680952

Designing for impact: a conceptual framework for learning analytics as self-assessment tools

Roland Tormey (a), Cécile Hardebolle (a), Francisco Pinto (b) and Patrick Jermann (b)

(a) Centre for Learning Sciences (LEARN) and Teaching Support Centre (CAPE), École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland; (b) Centre for Learning Sciences (LEARN) and Centre for Digital Education (CEDE), École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland

ABSTRACT
Although it is frequently claimed that learning analytics can improve self-evaluation and self-regulated learning by students, most learning analytics tools appear to have been developed as a response to existing data rather than with a clear pedagogical model. As a result there is little evidence of impact on learning. Even fewer learning analytics tools seem to be informed by an understanding of the social context and social practices within which they would be used. As a result, there is very little evidence that learning analytics tools are actually impacting on practice. This paper draws on research in self-regulated learning and in the social practices of learning and assessment to clarify a series of design issues which should be considered by those seeking to develop learning analytics tools which are intended to improve student self-evaluation and self-regulation. It presents a case study of how these design issues influenced the development of a particular tool: the Learning Companion.

KEYWORDS: learning analytics; self-regulated learning; social practice; feedback; dialogue

Introduction
There is a growing interest in the use of learning analytics and learning dashboards in higher
education, and significant claims are being made for the importance or potential utility of learn-
ing analytics (see e.g. Greller and Drachsler 2012, p. 43; Johnson, Adams Becker and Cummins,
2016; Ferguson et al., 2017). Despite this optimism there remain significant weaknesses to learn-
ing analytics research, with little empirical research so far being carried out on whether such
tools actually have an impact on learning and even less research being carried out on whether
they have impacted on the practice of students and teachers (Bodily and Verbert, 2017a;
Schwendimann et al., 2017). These empirical weaknesses are linked to a conceptual weakness:
few learning analytics studies make reference to any theoretical frame of reference and most
appear to be driven by the availability of particular types of data rather than by a pedagogical
concept (Jivet, Scheffel, Drachsler and Specht, 2017). The way in which the design of such tools
should be informed by concepts drawn from research on learning and on the practices of teach-
ers and learners, has hitherto been underexplored. The goal of this paper is to provide a concep-
tual framework for learning analytics-based student self-assessment tools, and to draw on that
framework to outline a series of design questions that should be considered when developing
such self-assessment tools.

The first part of this paper sets the scene by exploring some of the weaknesses in the field of
learning analytics research at present. The subsequent two sections identify a series of questions
and issues that designers should consider when developing learning analytics tools. These sec-
tions do this by drawing on (i) research on self-regulated learning (SRL) and (ii) research on the
social practice of learning and assessment in higher education. Drawing on this conceptual
framework, the fourth section presents a case study of how these considerations were taken into
account in developing one learning analytics tool – the Learning Companion. It is not proposed
that the Learning Companion is the optimum design for a learning analytics tool – rather it is
intended as a case study of how the issues emergent from SRL and social practice research were
considered in the design process.
Existing research seems to suggest that despite the efforts being put into designing learning
analytics tools, many tools are not being used by teachers and students, and some are not
impacting on learning. This paper identifies a series of design questions which, if considered by
learning analytics developers, should increase the likelihood of having an impact on both learn-
ing and practice.

Learning analytics: knowns and unknowns


Learning analytics are defined as “the measurement, collection, analysis and reporting of data
about learners and their contexts, for purposes of understanding and optimising learning and
the environments in which it occurs” (Siemens and Gasevic, 2012, p. 1). It is often assumed that
learning analytics in effect means ‘big’ data and automated predictive algorithms (Wilson,
Watson, Thompson, Drew and Doyle, 2017) applied to ‘trace’ data that is unwittingly left behind by
learners (Verbert et al., 2014). It is important to note, however, that the definition of learning
analytics is broad and need not involve big data or predictive algorithms. Bodily and Verbert
(2017b), for example, found learning analytics tools varied in the data they used (which may
come from student information systems, learning management systems, sensors such as eye-
trackers, or from student self-reports), in the intended user (which included administrators, teach-
ers, student counsellors and students), in the methods and techniques used to process the data,
and in the output (which can include recommendations, ‘intelligent’ tutoring, or simply visual
feedback of data).
One active strand within learning analytics research is the development of tools that can aid
student learning through allowing them to reflect on feedback about their learning, often using
visual representations of data (called ‘dashboards’). This is a potentially important development
since there is good research evidence that appropriate feedback to students can have a strong
positive impact on their learning (e.g. Hattie and Timperley, 2007), in part, because it can play a
role in developing the students’ capacity for self-regulated learning, that is, their capacity to
plan, monitor and evaluate their own learning and generate for themselves thoughts, feelings
and actions that are guided by their own learning goals (Zimmerman, 2000). However, while
educational researchers are clear about the value of feedback and self-regulated learning, it
seems challenging to turn this into practice at university level at a time when assessment is
experienced as a marginalized and unvalued activity by faculty members (Tuck, 2012, p. 215)
and as disempowering by students (Carless, 2006). While lecturers may struggle to deliver such
feedback and support such student self-regulation at scale using traditional methods, such strat-
egies may be possible using learning analytics tools, particularly those which aim to ensure that
it is students (and teachers) that make learning decisions rather than having decisions made for
them by a supposedly intelligent system (Jermann and Dillenbourg, 2008; Dillenbourg
et al. 2011).
At least some learning analytics research is explicitly focused on the ways in which learning
analytics tools can play this kind of role in supporting self-regulated learning. Yet, while learning
analytics certainly provide important opportunities to give feedback on assessments, and per-
haps to support student self-assessment in more productive ways, current evidence does not
suggest that these possibilities are being realised. Papers on student-facing dashboards, for
example, typically do not provide evidence on whether learning analytics have an effect on
learning and what evidence exists is at best inconclusive (Bodily and Verbert, 2017a; Jivet et al.,
2017; Schwendimann et al., 2017). These empirical weaknesses are also underpinned by consider-
able conceptual weaknesses. Jivet et al. (2017), reviewing the theoretical underpinnings of learn-
ing analytics tools, found that while some research in the field made reference to self-regulated
learning, 73% of papers did not actually make explicit any theoretical basis for their work (see
also Kitto, Lupton, Davis and Waters, 2017).
For learning analytics to play an effective role in improving the feedback process surrounding
assessment and evaluation tasks there is a need to think in terms of two different types of
impact: first, the learning analytics tool should impact on learning. To do this, we should expect
that learning analytics tools would be informed by the concepts and ideas which emerge from
learning research. Panadero, Klug and Järvelä (2016), for example, have recently reviewed the
SRL measurement literature to provide a checklist of considerations for those developing SRL
tools, and this provides a useful framework upon which to build in order to ensure impact on
learning. If impact on learning has received little attention, the second kind of impact that we
should expect from learning analytics, impact on practice, has effectively received no attention at
all. Bodily and Verbert’s systematic review (2017a) found that there was little research about the
extent to which learning analytics tools were actually taken up by practicing teachers and stu-
dents in real-life learning settings, but where student use of tools was reported, it was typically
low. Similarly Jivet et al. (2017) found that the minority of studies in the field which provided a
theoretical framework relied almost exclusively on psychological concepts – more sociological
concerns for learning and assessment practices “in the wild” were not prioritised.

Designing SRL-informed learning analytics


Given that the focus here is on tools that can aid the development of self-regulated learning
through providing visual feedback to students, an understanding of the research on self-regu-
lated learning can play an important role in ensuring learning analytics have impact on learning.
We know, for example, that SRL teaching approaches are consistently associated with learning
gains (Haller, Child and Walberg, 1988; Hattie, Biggs and Purdie, 1996; Dignath and Büttner,
2008). SRL is also well adapted to higher education settings, where the students’ activities are less
regulated by a teacher than would be the case in other parts of the formal education system
(Tormey, 2014), and where it is generally expected (by students and academic staff) that students
show a higher degree of independence in managing their own learning.
Self-regulated learning is defined by Zimmerman (2000) as involving self-generated thoughts,
feelings and actions that are systematically guided by personal goals. The self-regulation of learn-
ing behaviour can be conceptualized as a hierarchy of negative feedback loops (Carver and
Scheier, 2012). In such loops, a process is monitored via an indicator that is measured on a regu-
lar basis. Diagnosis consists of comparing the indicator at a given time with a desired value.
Whenever the discrepancy is big enough, actions are triggered in order to reduce it.
Self-regulation is, therefore, clearly linked to evaluation and self-evaluation.
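To make the control-theoretic idea concrete, the following sketch shows a single monitoring cycle of such a loop: an indicator is measured, compared with a desired value, and a corrective action is signalled only when the discrepancy exceeds a threshold. This is a minimal, hypothetical Python illustration of the general mechanism, not an account of any particular tool; the function name, parameters and the exercise-count example are assumptions made purely for illustration.

```python
# Minimal sketch (hypothetical names) of one negative feedback loop in the sense
# of Carver and Scheier: monitor an indicator, compare it with a desired value,
# and trigger action only when the discrepancy is big enough.

def self_regulation_step(indicator: float, target: float, threshold: float) -> str:
    """Diagnose one monitoring cycle and decide whether corrective action is needed."""
    discrepancy = target - indicator
    if abs(discrepancy) > threshold:
        # In a real loop this would trigger a strategy, e.g. scheduling more practice.
        return f"act: reduce a discrepancy of {discrepancy:+.1f}"
    return "no action needed"

# Example: a learner aims to solve 10 exercises a week and tolerates a gap of 2.
print(self_regulation_step(indicator=4, target=10, threshold=2))   # -> act
print(self_regulation_step(indicator=9, target=10, threshold=2))   # -> no action needed
```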
Zimmerman’s definition focuses on the ‘self-generation’ of thoughts, feelings and behaviours,
which may be taken to imply a conscious process of exercising executive control over oneself
(Hadwin, Järvelä and Miller, 2018, p. 85). However, it is important to note that SRL also involves
behaviours which are self-regulatory, even if not consciously so. For example, a learner may con-
sciously remind themselves to check their own solution to a mathematical problem (i.e. to self-
evaluate), or they may simply be in the habit of checking automatically without having to self-
generate this behaviour. Some researchers have argued that it is necessary for self-evaluation
and self-regulation strategies themselves to become automatic reflexes in order to ensure they
do not impose a cognitive load that ends up actually impeding learning (Winne, 2011).
Self-regulated learning is generally understood as a phased process with a forethought
phase, a performance phase and a self-reflection phase. Of these three phases, SRL interventions
during the performance phase may be particularly challenging due to the aforementioned limits
of cognitive load. In addition, SRL also refers to different levels of thinking. As Hadwin, Järvelä
and Miller (2018) noted, for example, some models of SRL are focused on macro-organization of
learning processes (including large scale planning such as “First I start studying 2 weeks before
exams, and I pace myself”) while others operate at a meso-level of organising learning in specific
contexts such as “I make an outline before I write my paper” (both examples from Zimmerman
[2013, p. 138]), while still others focus on the micro-level of the individual question: “I made an arith-
metic mistake in the second exercise”. There is evidence that although students typically prefer
to get direct feedback at the level of the task (micro-level), feedback at the level of the process
(meso-level) has, in general, a more positive impact on learning (Hattie and Timperley, 2007).
Evidence about teaching people SRL strategies suggests that such training is most effective
when it is linked to the discipline in which it is to be used rather than being taught as a general
or all-purpose package of portable “study” skills (Hattie, Biggs and Purdie, 1996). In addition,
learners do not only need feedback about what is going right and wrong with their disciplinary
thinking strategies, they also need information about how to improve (Hattie and
Timperley, 2007).
While some learning analytics tools focus on the use of ‘trace’ data that students (often unwit-
tingly) leave behind on learning management systems, there is evidence that data which is con-
sciously produced by students may be particularly suited to the development of self-regulation;
as Panadero et al. (2016) have recently highlighted, measuring SRL may, in itself, constitute an
intervention that can enhance SRL practices. An example of such ‘measurement + intervention’
tools is the mathematics learning diaries used by Schmitz and Perels (2011), who found that ask-
ing students to record observations on homework increased their learning. This may provide an
interesting avenue for development for self-regulation learning analytics tools.
We have noted that few learning analytics tools seem to start with a clear pedagogical vision,
and there is little evidence at present of them impacting on learning (Jivet et al., 2017;
Schwendimann et al., 2017). By drawing on the research on self-regulated learning, we can iden-
tify the kinds of issues that designers of learning analytics tools should consider if they want
their tool to have an impact on learning. The issue here is not to provide design prescriptions
but to outline the kinds of issues that should be considered in designing a learning analytics
tool which is intended to improve student learning and to be used by learners and lecturers: the
goal is, therefore, to identify the kinds of spaces within which designers should be creative
rather than to propose a ‘single best model’ of learning analytics. The design issues emergent
from this literature are:

1. How can the tool be designed so that students see it as giving feedback on self-regulation prac-
tices directly relevant to their discipline (rather than only at the level of discipline-independent
study skills or progress)?
2. Should the tool focus on what students do before learning, during learning, or after learning
(noting that the cognitive load of self-evaluating practices during learning might be difficult
to manage if the learning task itself is complex)?
3. Can self-regulation processes become automatic reflexes, rather than having to be actively
monitored by students?
4. Should the tool focus on organization of study (macro), on context-specific thinking proc-
esses (meso) or on performance of specific tasks (micro) (recognizing that students often
prefer feedback on a micro-level, but usually benefit most from feedback on a meso level)?
5. How can the tool indicate to the student strategies for improvement (as distinct to simply
telling them how they are performing)?
6. Can measurement of SRL be used as a self-evaluation intervention in its own right?

Designing learning analytics informed by the social practice of university teaching, learning and assessment
If SRL holds out the possibility of designing assessment and reflection tools that may have an
impact on learning, how can we ensure that we develop tools that also have the possibility of
providing impact on practice? The low rates of use of learning analytics fit with a broader pat-
tern of low adoption of computerized teaching and learning supports in education in general. As
Cuban, Kirkpatrick and Peck (2001, p. 816) have written in describing the history of technological
innovations aimed at improving teaching: the ‘design, adoption, and implementation of new
technologies … have had a long history that invites little optimism’.
To understand the low rates of adoption of technology-based assessment and teaching tools
it is useful to think of learning and assessment as being social practices. The term social practice
refers at its most basic to the study of what people do: ‘what people do is a social phenomen-
on … and all social life can be interpreted as consisting of a series of clusters of practices in dif-
ferent fields of activity, within families, friendship groups, at work and so on’ (Saunders, 2011, p.
2). While SRL emphasises individual agency, social practice research recognises that our practices
take place in contexts in which some groups have more power than others to define what is
seen as valid (Bourdieu, 1990a; James, 2000; Leathwood, 2005). Although differences in emphasis
between SRL and social practice research exist, there are also clear points of contact between
the two bodies of research: while SRL research starts from the perspective that learners often
non-reflectively engage in sub-optimal learning and problem-solving approaches, social practice
researchers are also interested in the kinds of things people do non-reflectively – practices which
represent ‘the pre-verbal taking for granted of the world … that treats the body as a living
memory pad, an automaton that “leads the mind unconsciously along with it”’ (Bourdieu,
1990b, p. 68).
While social practice researchers are more likely to present their own research as a critique of
the status quo than as a template for action, the concept of ‘reflexivity’ (Bourdieu and
Wacquant, 1992) extends to the social context the kinds of self-reflection which are intrinsic to
SRL. Indeed, Bourdieu has argued that such socio-reflexivity offers ‘some of the most efficacious
means of attaining the freedom from social determinisms which is possible only through know-
ledge of those very determinisms’ (Bourdieu, 1998, p. ix).
So, what do we see if we look at student and lecturer assessment activities in higher educa-
tion as practices, and how does that help us to design learning analytics tools that would aid
students in being more effective in self-regulation? First, practices are extremely hard to change,
partially because they are automated, pre-reflective activities and partially because they are
social and not individual. Seen as embedded in a social context, practices are performed in social
settings and interlock with the practices of others. While SRL research focuses on how the
learner can change their own practice, it does not ask how their practices interact with the prac-
tices of teachers or other learners, and how changing one part of this system of interrelated
practices may impact upon other parts of the system. Assessments are ‘co-constructed in com-
munities of practice … [and] viewed as a messy practice where multiple subjectivities and con-
tingencies affect the ways that judgements are made about students’ work’ (Orr, 2007, p. 647).
Designing a tool that will impact on the practice of self-assessment needs to consider these mul-
tiple subjectivities and contingencies.
These multiple subjectivities are not simply ‘multiple’ in the sense that there are multiple actors
involved but also ‘multiple’ in the sense that any one actor can ascribe a series of different meanings
to any given situation. Students ascribe a range of meanings and values to self-assessment practices,
ascribing differential values to the product and processes of learning depending on how it is organ-
ised and how others in their social milieu communicate a sense of ‘what counts’ (Creme, 2005).
Likewise, for faculty members, assessing, marking and giving feedback can have multiple meanings
including that of administrative duty, undervalued labour, or opportunity for dialogue with students
(Tuck, 2012). Indeed, the idea that dialogue is a meaningful goal for lecturers in assessment situa-
tions is something that emerges in a number of studies of assessment (Torrance, 2000;
Carless, 2006).
In the context of an analysis of power relations, the SRL focus on individual agency can be
seen as an ideological device which places responsibility for a learner’s poor assessment perform-
ance on the learner’s own behaviour, ensuring that the decisions made in shaping the formal and
hidden curriculum are left unquestioned. The design of a learning analytics tool should be sensi-
tive to issues of power and inequalities within the context in which it is used.
We have noted that there is little evidence at present of learning analytics tools actually being
used by students or teachers (Bodily and Verbert, 2017a). Drawing on the research on social
practice we can identify the kinds of issues that designers of learning analytics tools should con-
sider if they want their tool to have an impact on practice. Once more, the issue here is not to
provide design prescriptions but to outline the kinds of issues that should be considered during
the design process.

1. How can the tool explicitly provide members of the community with opportunities to reflect
upon the pre-verbal or implicit behaviours (practices) that are impacting upon learning
and assessment?
2. Can the tool be designed so that it builds on existing practices within a community rather
than seeking to develop entirely new practices?
3. Can the tool be designed so that it is seen to be useful to multiple actors within the com-
munity of practice (especially lecturers as well as students)?
4. How can the tool ensure that it is sensitive to power and inequality issues within
the community?

The Learning Companion: a case study


Having identified the kinds of issues that should be taken into account in designing a self-
evaluation learning analytics tool, this section describes how these issues shaped the design
of one such tool: the Learning Companion, which is designed to improve students’
skills in problem solving in scientific and mathematical exercises in the context of science and
engineering degree programmes. The Learning Companion collects data in the form of a self-report
learning diary which students complete after they have worked on mathematical exercises. This
data is then visually represented back to students in a dashboard along with personalized recom-
mendations and learning resources for improving their self-regulation skills.
Teachers set up the diary with a list of exercises students are expected to solve in each class.
Students complete a standardised diary questionnaire for each exercise, logging the number of
exercises they attempted, whether they succeeded or not, as well as the difficulties they encoun-
tered in the process. The questionnaire includes a predefined list of difficulties which, while gen-
eric, relates to quantitative problem solving. For example, one possible difficulty they may
choose is: “I used the right method but I made calculation errors (arithmetic, algebraic, trigono-
metric, etc.)”. Because the questionnaire is standardized, it does not include reference to the par-
ticularities of each question but rather addresses the more general processes of mathematical
problem solving. For instance, a difficulty like, “I could not see how to start” relates to a failure
in problem analysis, whereas “I tried a method and got stuck” suggests a difficulty in debugging.
The list of difficulties provided was developed with reference to literature on student problem
solving and to research carried out with students in the institution. Students also contributed to
the phrasing of the items included in the questionnaire to ensure that they made sense to stu-
dents. If students do not find a questionnaire response that matches their particular difficulty,
they can also enter their own difficulty in plain text.
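As an illustration of the kind of data such a diary generates, the sketch below models a single diary entry in Python. The schema, field names and difficulty codes are hypothetical assumptions rather than the Learning Companion's published data model, although the difficulty wordings are taken from the examples quoted above.

```python
# Hypothetical sketch of one diary entry; the schema is illustrative, not the
# Learning Companion's actual data model.
from dataclasses import dataclass, field
from typing import List, Optional

PREDEFINED_DIFFICULTIES = {
    "no_start": "I could not see how to start",
    "stuck_method": "I tried a method and got stuck",
    "calc_errors": "I used the right method but I made calculation errors",
    "missing_concepts": "I didn't understand the concepts required for this exercise",
}

@dataclass
class DiaryEntry:
    exercise_id: str
    attempted: bool
    solved: bool
    difficulties: List[str] = field(default_factory=list)  # keys of PREDEFINED_DIFFICULTIES
    other_difficulty: Optional[str] = None                  # free-text fallback

# One student's entry for a single exercise:
entry = DiaryEntry(exercise_id="week3-ex2", attempted=True, solved=False,
                   difficulties=["no_start"])
```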
It is expected that, in filling out the questionnaire, students review each exercise and clarify
for themselves what their difficulties are. Students then see their data on a dashboard which is
designed to give them: a) feedback on what kind of recurrent difficulties they are having (using
a bar chart), b) targeted advice designed to help them address their most common difficulties
(in short texts), and c) an indication as to how they compare to their class more generally in
terms of the number of exercises they have completed and the difficulties they are experiencing.
Presented both in a visual and textual format, this feedback and the associated recommenda-
tions are designed to help them identify aspects of the problem-solving process that they should
work on improving. The advice includes a short text on the difficulty they are experiencing,
access to videos to learn more about how to address the difficulty, and short profiles of ‘self-
regulation role models’ including people drawn from scientific fields like Andrew Wiles and
Sheryl Sandberg, and from high performance sports, including Serena Williams and Stan
Wawrinka. The teacher version of the dashboard displays the same information as the student
dashboard but aggregated at the level of the class only (individual student profiles are not vis-
ible to the teacher).
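A minimal sketch of how these two views could be computed from diary data is given below; the data shapes and function names are illustrative assumptions rather than the tool's actual implementation. What it makes concrete is the aggregation step: the student view counts one individual's recurrent difficulties for the bar chart, while the teacher view only ever receives totals for the whole class.

```python
# Illustrative aggregation for the two dashboards described above. Entries are
# hypothetical (exercise_id, solved, [difficulty codes]) tuples; these are not
# the Learning Companion's actual data structures.
from collections import Counter
from typing import Dict, List, Tuple

Entry = Tuple[str, bool, List[str]]

def student_difficulty_counts(entries: List[Entry]) -> Counter:
    """Count how often each predefined difficulty recurs for one student."""
    return Counter(code for _, _, codes in entries for code in codes)

def class_dashboard(diaries: Dict[str, List[Entry]]) -> dict:
    """Aggregate over the whole class; no individual student profile is exposed."""
    totals: Counter = Counter()
    solved = 0
    for entries in diaries.values():
        totals.update(student_difficulty_counts(entries))
        solved += sum(1 for _, ok, _ in entries if ok)
    return {"difficulty_totals": dict(totals), "exercises_solved": solved}

# Example with two (anonymised) students:
diaries = {
    "student_a": [("ex1", False, ["no_start"]), ("ex2", True, [])],
    "student_b": [("ex1", True, ["calc_errors"]), ("ex2", False, ["no_start"])],
}
print(class_dashboard(diaries))
```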
How was this approach influenced by the design issues which emerge from the SRL and
social practice literature? (Since the design process does not follow each issue in a stepwise fash-
ion, the issues are raised in an order different to that in which they are presented above.)
How can the tool give feedback on self-regulation relevant to the students’ discipline? Student success in
the first year of the programme depends heavily on their ability to solve quantitative problems
in physics and mathematics. Research on solving quantitative problems has identified that self-
regulation heuristics for analysing a problem, monitoring during problem-solving, and analysis of
a solution after completion all play important roles (Schoenfeld, 1992, 2013). It made sense,
therefore, to target these problem-solving self-regulation skills. When students identify a particu-
lar difficulty with an exercise (“I could not see how to start”, for example), the Learning
Companion gives them a graphical indication as to how frequently this problem has arisen for
them, which allows them to become aware of their recurrent difficulties. It also indicates that
this suggests a difficulty with analysing problems effectively. Finally, it recommends additional
resources to help them improve this skill.
How can the tool make explicit the practices which affect learning? One particular challenge in
learning to solve non-routine quantitative problems is that they look a lot like routine problems.
Students have typically completed many routine problems in high school and have come to uni-
versity having developed a set of implicit practices appropriate for solving routine problems.
These include quickly scanning the surface features of the problem to see whether it looks like
problems they have seen before (Chi, Feltovich and Glaser, 1981; Kober, 2015, pp. 72–77), decid-
ing on a solution method quickly, and failing to monitor or adjust their progress while problem
solving (Schoenfeld, 1992, p. 356). These practices are reinforced by beliefs about problem solv-
ing, including the belief that solution methods should be known rather than figured out
(Schoenfeld, 1988).
Taken together, this can be seen as a set of automated non-SRL behaviours (Winne, 2011): stu-
dents’ pre-verbal, non-reflexive practices mean they do not adequately analyse, plan, monitor or review their
problem solving, many do not use a problem-solving strategy appropriate to the problem before
them, and they generally do not reflect on this strategy and so are unaware that it is inappropri-
ate. It was decided therefore to focus on a tool for making explicit to the student the heuristics
of self-regulated problem solving. The self-evaluation questionnaire, therefore, included items
designed to diagnose a lack of adequate analysis and monitoring and to give students feedback
and support on developing these skills.
Is the focus on before, during or after learning, and can the tool develop automatic SRL reflexes?
Research with first-year students identified that they were particularly weak in the self-reflection
phase of problem solving, with less than 30% reporting that they self-evaluated their perform-
ance on a problem after solving it, and observations suggesting that the actual percentage was
even lower (Dure, Leroux, Meng and Baril, 2016). It was therefore decided to focus the tool on
the reflection phase (i.e. after learning). As such, the students completed the questionnaire after
they had completed the exercises (data was not collected on their process of solving problems
while they were solving them). By scaffolding self-evaluative reflection in this way, it may well
become an automated practice for students.
Is the focus macro, meso or micro SRL? A needs analysis with students indicated that they
appeared to have problems at all three levels of self-regulation: many did not plan for adequate
study time early enough in the term (macro) (Dillenbourg, 2012), many used inappropriate prob-
lem-solving processes given the nature of the problems they faced (meso) (Dure et al., 2016),
and others did not appear to target their efforts on specific skill areas in which they were weak
(micro) (Campiche, Chandran, Lombardo and Trömel, 2015). The questionnaire was specifically
designed so that the difficulties it addressed referred to mathematical problem solving in general
rather than the specific features of any particular problem. This drew students’ attention to the
processes of problem solving which are transferable from one problem to another, rather than
directing their attention to the specific features of a given problem.
How can the tool build on existing practices within a community? Different universities have dif-
ferent cultural practices around student independent problem solving in mathematics and in sci-
entific disciplines. In the university in question, exercise classes follow what is often called a
travaux dirigés model, whereby students get problems which they complete more or less indi-
vidually in exercise classes in which a teacher or tutor is available to answer questions or provide
assistance. Students are later provided with written solutions so that they can figure out their
own errors through comparing their homework with the model solutions. This normative practice
is reinforced by a series of cultural beliefs shared within the community of practice that makes
this practice seem not just normal but also desirable, including, for example, beliefs about the
importance of student independent work, as opposed to ‘teacher spoon-feeding’. A tool which
encouraged students to self-evaluate after problem solving fitted with this cultural practice and
had the added benefit of fitting with the high value placed on student independent work.
Can measurement of SRL be used as a self-evaluation intervention? The choice of which evi-
dence to present to learners in order to impact upon their self-regulation is crucial: too many
learning analytics dashboards display raw data about behaviour or grade scores that are difficult
to interpret but which nonetheless are expected by the designers to ‘magically’ affect the ways
in which learners or teachers control the learning process (Kitto et al., 2017). If it is intended that
the learning analytics tool will actually impact on self-regulation, then it should collect and dis-
play data about self-regulation. Previous evidence (e.g. Schmitz and Perels, 2011) indicated that
measuring students’ self-regulated learning behaviour by asking them to self-assess this behav-
iour was associated with significant learning gains. Therefore, it was decided to adopt a simi-
lar strategy.
Can the tool indicate to the student strategies for improvement? In the Learning Companion the
visualisations provided ensured students became aware of repeated patterns in their own diffi-
culties. Students were also provided with targeted advice based on the pattern of difficulties
which they identified. In this way both the data visualization and the targeted advice achieve a
translation between SRL levels: the analytics tool asks students to self-report at the micro-level
(e.g. I didn’t understand the concepts required for this exercise) but visualises and provides feed-
back at the meso-level (e.g. you should review lecture notes when attempting exercises).
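This translation can be pictured as a simple lookup from micro-level difficulty reports to meso-level process categories and process-level advice. The sketch below is purely illustrative: the mapping entries paraphrase examples given in the text and are not the Learning Companion's actual rule set.

```python
# Illustrative micro-to-meso translation: each micro-level difficulty a student
# reports is mapped to a meso-level problem-solving process and a piece of
# process-level advice. The mapping paraphrases examples from the text only.
MICRO_TO_MESO = {
    "I could not see how to start":
        ("problem analysis", "Spend time restating the problem and listing what is known before choosing a method."),
    "I tried a method and got stuck":
        ("monitoring and debugging", "Pause regularly to check whether your current method is still making progress."),
    "I didn't understand the concepts required for this exercise":
        ("preparation", "Review the relevant lecture notes before attempting the exercises."),
}

def meso_feedback(reported_difficulty: str) -> str:
    process, advice = MICRO_TO_MESO.get(
        reported_difficulty,
        ("general self-evaluation", "Review your approach after each exercise."))
    return f"This points to {process}: {advice}"

print(meso_feedback("I could not see how to start"))
```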
Can the tool be useful to lecturers as well as students? The wider research evidence suggests
that in addition to seeing evaluation and feedback as an administrative chore and as underval-
ued work, university lecturers also see it as an opportunity for dialogue with students (Tuck,
2012). Similar sentiments were evident among lecturers in the context in question, as reflected,
for example, in the considerable growth in use of classroom response systems (‘clicker’ systems) for
formative assessment purposes in lectures. While such classroom response systems operated in
the synchronous environment of the lecture, no similar tools were in use to allow feedback to
teachers in the (asynchronous) environment of the travaux dirigés classes. The Learning
Companion provides teachers with feedback on homework without requiring significant add-
itional administrative chores from them. Since lecturers play a key role in defining what is valued
in this community of practice, tools which are integrated into teaching by the teacher stand a
better chance of being used by students.
How can the tool be sensitive to power and inequality issues? The idea that learning analytics
can increase student surveillance by teachers has been raised within the wider literature (e.g.
Wilson et al., 2017). While the existing travaux dirigés practices meant that students lacked feed-
back on their learning, they also provided them with an anonymity which was clearly valued by stu-
dents. Discussion with student representatives made clear that they would not favour learning
analytics tools that provided teachers with individualised feedback on learners. Hence teachers
were provided with data at the level of the whole class. The questioning of power and inequality
in this setting also raised a second concern for a learning analytics tool: women are typically
under-represented in science and engineering education contexts and have been described as
experiencing a chilly climate in such settings (Barnard, Hassan, Bagilhole and Dainty, 2012;
Lichtenstein, Chen, Smith and Maldonado, 2014). Care was needed therefore to ensure that the
feedback provided to students did not reinforce implicit biases. Gender balance therefore
became an important feature in the choice of self-regulation role models (e.g. the inclusion of
both Andrew Wiles and Sheryl Sandberg).
It is important to remember that the Learning Companion is not being presented as repre-
senting the optimum model for learning analytics-supported self-evaluation in higher education
– it is just one possible solution to the design issues which emerge from the SRL and social prac-
tice literature. The purpose of this paper is to focus on these issues that designers should bear
in mind if they want to develop a tool that has a chance of impacting on learning and on prac-
tice. The Learning Companion itself is at present in the initial stages of implementation and early
feedback suggests that it is seen by a number of the lecturers of first year students as addressing
a clear need. Since different lecturers are integrating it into their teaching using a number of
strategies, it will be possible to see how different use cases give rise to impact on practice.

Conclusion
Reviewing existing learning analytics approaches leaves the impression that learning analytics –
like so many previous computer-aided learning tools – are often little more than a solution in
search of a problem. It seems as if such tools are often developed by those with an interest in
algorithms, data analytics and data visualisation, but who hold naïve views of how learning and
teaching happen. Given how expensive learning analytics tools are to create, we should move
beyond the stage in which developers stand in a cornfield, like Kevin Costner in the movie Field
of Dreams, muttering to themselves ‘build it, and they will come’. The evidence suggests learners
do not come, and when they do, they may not learn very much. If we are to have impact on
learning and on practices we will need to do better than that.
The goal of this paper was to address the lack of theorisation underpinning the design of
many learning analytics tools ostensibly developed to aid self-evaluation. Drawing on research in
self-regulated learning and in social practices of teaching and assessment in higher education
will not guarantee the success of a self-evaluation learning analytics tool. It will at least ensure
that, when we design tools, we have good reasons to expect that they might have an impact.

Disclosure statement
No potential conflict of interest was reported by the authors.

ORCID
Roland Tormey http://orcid.org/0000-0003-2502-9451
Patrick Jermann http://orcid.org/0000-0001-9199-2831

References
Barnard, S., T. Hassan, B. Bagilhole, and A. Dainty. 2012. “They’re Not Girly Girls’: An Exploration of Quantitative and
Qualitative Data on Engineering and Gender in Higher Education.” European Journal of Engineering Education
37(2): 193–204. doi:10.1080/03043797.2012.661702.
Bodily, R., and K. Verbert. 2017a. “Review of Research on Student-Facing Learning Analytics Dashboards and
Educational Recommender Systems.” IEEE Transactions on Learning Technologies 10(4): 405–418. doi:10.1109/TLT.
2017.2740172.
Bodily, R., and K. Verbert. 2017b. “Trends and Issues in Student-Facing Learning Analytics Reporting Systems
Research.” In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, pp. 309–318.
March 13–17, 2017, ACM, Vancouver, BC, Canada.
Bourdieu, P. 1990a. Homo Academicus. Cambridge: Polity Press.
Bourdieu, P. 1990b. The Logic of Practice. Cambridge: Polity Press.
Bourdieu, P. 1998. Practical Reason. Cambridge: Polity Press.
Bourdieu, P., and L. Wacquant. 1992. An Invitation to Reflexive Sociology. Cambridge: Polity Press.
Campiche, P., O. Chandran, D. Lombardo, and A. Trömel. 2015. Identification of common errors in learning classical
mechanics. Lausanne: EPFL. Retrieved from http://bit.ly/errorsinmeca
Carless, D. 2006. “Differing Perceptions in the Feedback Process.” Studies in Higher Education 31(2): 219–233. doi:10.
1080/03075070600572132.
Carver, C. S., and M. F. Scheier. 2012. Attention and Self-Regulation: A Control-Theory Approach to Human Behavior.
New York, NY: Springer Science and Business Media.
Chi, M. T. H., P. J. Feltovich, and R. Glaser. 1981. “Categorisation and Representation of Physics Problems by Experts
and Novices.” Cognitive Science 5(2): 121–152. doi:10.1207/s15516709cog0502_2.
Creme, P. 2005. “Should Student Learning Journals Be Assessed?” Assessment and Evaluation in Higher Education
30(3): 287–296. doi:10.1080/02602930500063850.
Cuban, L., H. Kirkpatrick, and C. Peck. 2001. “High Access and Low Use of Technologies in High School Classrooms:
Explaining an Apparent Paradox.” American Educational Research Journal 38(4): 813–834. doi:10.3102/
00028312038004813.
Dignath, C., and G. Büttner. 2008. “Components of Fostering Self-Regulated Learning among Students. A Meta-
Analysis on Intervention Studies at Primary and Secondary School Level.” Metacognition and Learning 3(3):
231–264. doi:10.1007/s11409-008-9029-x.
Dillenbourg, P. 2012. EPFL Campus 2011. Lausanne: École polytechnique fédérale de Lausanne.
Dillenbourg, P., G. Zufferey, H. Alavi, P. Jermann, S. Do-Lenh, Q. Bonnard, … F. Kaplan. 2011. “Classroom
Orchestration: The Third Circle of Usability.” CSCL2011 Proceedings 1: 510–517.
Dure, L., P. Leroux, H. Meng, and C. Baril. 2016. How do students solve problems? Presented at the How People
Learn Student Presentations, École polytechnique fédérale de Lausanne, Lausanne.
Ferguson, R., S. Barzilai, D. Ben-Zvi, C. A. Chinn, C. Herodotou, Y. Hod, … P. McAndrews. 2017. Innovating Pedagogy
2017 (Open University Innovation Report 6). Milton Keynes, UK: The Open University.
Greller, W., and H. Drachsler. 2012. “Translating Learning into Numbers: A Generic Framework for Learning Analytics.”
Educational Technology & Society 15(3): 42–57.
Hadwin, A., S. Järvelä, and M. Miller. 2018. “Self-Regulation, Co-Regulation, and Shared Regulation in Collaborative
Learning Environments.” In Handbook of Self-Regulation of Learning and Performance, edited by D. A. Schunk and
J. A. Greene, 2nd ed., 83–105. New York, NY: Routledge.
Haller, E. P., D. A. Child, and H. J. Walberg. 1988. “Can Comprehension Be Taught? A Quantitative Synthesis of
‘Metacognitive’ Studies.” Educational Researcher 17(9): 5. doi:10.2307/1175040.
Hattie, J., J. Biggs, and N. Purdie. 1996. “Effects of Learning Skills Interventions on Student Learning: A Meta-
Analysis.” Review of Educational Research 66(2): 99. doi:10.2307/1170605.
Hattie, J., and H. Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77(1): 81–112. doi:10.
3102/003465430298487.
James, D. 2000. “Making the Graduate; Perspectives on Student Experience of Assessment in Higher Education.” In
Assessment: Social Practice and Social Product, edited by A. Filer. London: Routledge Falmer.
Jermann, P., and P. Dillenbourg. 2008. “Group Mirrors to Support Interaction Regulation in Collaborative Problem
Solving.” Computers & Education 51(1): 279–296. doi:10.1016/j.compedu.2007.05.012.
Jivet, I., M. Scheffel, H. Drachsler, and M. Specht. 2017. “Awareness Is Not Enough: Pitfalls of Learning Analytics
Dashboards in the Educational Practice.” In Data Driven Approaches in Digital Education, 82–96. Tallinn, Estonia:
Springer Nature.
Johnson, L., S. Adams Becker, and M. Cummins. 2016. The NMC Horizon Report: 2016 Higher Education Edition.
Retrieved from http://www.deslibris.ca/ID/10090169
Kitto, K., M. Lupton, K. Davis, and Z. Waters. 2017. “Designing for Student-Facing Learning Analytics.” Australasian
Journal of Educational Technology 33(5): 152–168. doi:10.14742/ajet.3607.
Kober, N. 2015. Reaching Students: What Research Says about Effective Instruction in Undergraduate Science and
Engineering. Washington, DC: National Academies Press. Retrieved from http://www.nap.edu/catalog/18687
Leathwood, C. 2005. “Assessment Policy and Practice in Higher Education: Purpose, Standards and Equity.”
Assessment and Evaluation in Higher Education 30(3): 307–324. doi:10.1080/02602930500063876.
Lichtenstein, G., H. L. Chen, K. A. Smith, and T. A. Maldonado. 2014. “Retention and Persistence of Women and
Minorities along the Engineering Pathway in the United States.” In Cambridge Handbook of Engineering
Education Research, edited by A. Johri and B. M. Olds, 311–334. New York, NY: Cambridge University Press.
Orr, S. 2007. “Assessment Moderation: Constructing the Marks and Constructing the Students.” Assessment &
Evaluation in Higher Education 32(6): 645–656. doi:10.1080/02602930601117068.
Panadero, E., J. Klug, and S. Järvelä. 2016. “Third Wave of Measurement in the Self-Regulated Learning Field: When
Measurement and Intervention Come Hand in Hand.” Scandinavian Journal of Educational Research 60(6):
723–735. doi:10.1080/00313831.2015.1066436.
Saunders, M. 2011. “Setting the Scene: The Four Domains of Evaluative Practice in Higher Education.” In
Reconceptualising Evaluation in Higher Education; the Practice Turn, edited by M. Saunders, P. R. Trowler, and V.
Bamber. Maidenhead, Berkshire: Open University Press.
Schmitz, B., and F. Perels. 2011. “Self-Monitoring of Self-Regulation during Math Homework Behaviour Using
Standardized Diaries.” Metacognition and Learning 6(3): 255–273. doi:10.1007/s11409-011-9076-6.
Schoenfeld, A. H. 1988. “When Good Teaching Leads to Bad Results: The Disaster of ‘Well Taught’ Mathematics
Courses.” Educational Psychologist 23(2): 145–166. doi:10.1207/s15326985ep2302_5.
Schoenfeld, A. H. 1992. “Learning to Think Mathematically: Problem Solving, Metacognition and Sense Making in
Mathematics.” In Handbook for Research on Mathematics Teaching and Learning, edited by D. Grouws. New York:
Macmillan. doi:10.1177/002205741619600202.
Schoenfeld, A. H. 2013. “Reflections on Problem Solving Theory and Practice.” The Mathematics Enthusiast 10(1/2):
9.
Schwendimann, B. A., M. J. Rodriguez-Triana, A. Vozniuk, L. P. Prieto, M. S. Boroujeni, A. Holzer, D. Gillet, and P.
Dillenbourg. 2017. “Perceiving Learning at a Glance: A Systematic Literature Review of Learning Dashboard
Research.” IEEE Transactions on Learning Technologies 10(1): 30–41. doi:10.1109/TLT.2016.2599522.
Siemens, G., and D. Gasevic. 2012. “Guest editorial-Learning and Knowledge Analytics.” Educational Technology and
Society 15(3): 1–2.
Tormey, R. 2014. “The Centre Cannot Hold: Untangling Two Different Trajectories of the ‘Approaches to Learning’
Framework.” Teaching in Higher Education 19(1): 1–12. doi:10.1080/13562517.2013.827648.
Torrance, H. 2000. “Postmodernism and Educational Assessment.” In Assessment: Social Practice and Social Product,
edited by A. Filer. London: Routledge Falmer.
Tuck, J. 2012. “Feedback-Giving as Social Practice: Teachers’ Perspectives on Feedback as Institutional Requirement,
Work and Dialogue.” Teaching in Higher Education 17(2): 209–221. doi:10.1080/13562517.2011.611870.
Verbert, K., S. Govaerts, E. Duval, J. L. Santos, F. Van Assche, G. Parra, and J. Klerkx. 2014. “Learning Dashboards: An
Overview and Future Research Opportunities.” Personal and Ubiquitous Computing 18(6): 1499–1514. doi:10.1007/s00779-013-0751-2.
Wilson, A., C. Watson, T. L. Thompson, V. Drew, and S. Doyle. 2017. “Learning Analytics: Challenges and
Limitations.” Teaching in Higher Education 22(8): 991–1007. doi:10.1080/13562517.2017.1332026.
Winne, P. H. 2011. “A Cognitive and Metacognitive Analysis of Self-Regulated Learning.” In Handbook of Self-
Regulation of Learning and Performance, 15–32. New York, NY: Routledge.
Zimmerman, B. J. 2000. “Self-Efficacy: An Essential Motive to Learn.” Contemporary Educational Psychology 25(1):
82–91. doi:10.1006/ceps.1999.1016.
Zimmerman, B. J. 2013. “From Cognitive Modeling to Self-Regulation: A Social Cognitive Career Path.” Educational
Psychologist 48(3): 135–147. doi:10.1080/00461520.2013.794676.
