
Studies in Higher Education, 2015

Vol. 40, No. 9, 1495–1506, http://dx.doi.org/10.1080/03075079.2013.868878

Making judgements: investigating the process of composing and receiving peer feedback
Teresa McConlogue*

Centre for the Advancement of Learning and Teaching, UCL, 1-19 Torrington Place,
London, WC1E 7HB, UK
*Email: t.mcconlogue@ucl.ac.uk

Recent studies have argued that tutor feedback is failing to support students’
progression. The potential for peer feedback, i.e. feedback composed by peer
assessors, to support learning has been under-researched. The aim of this paper
was to explore a case study of a peer assessor composing and receiving peer
feedback. The paper reports a case study tracking a peer assessor through the
process of grading and composing peer comments and her reactions to receiving
peer feedback. The data consist of feedback comments, reflections and a series
of interviews. It was found that while the process of composing feedback
comments was viewed positively, receiving comments was, on the whole,
viewed negatively. The author suggests that helping students to become peer
assessors is a long-term process and that initial peer feedback should be withheld. As
students develop expertise in the subject matter and in composing feedback,
comments could be exchanged.
Keywords: assessment; dialogue; feedback on student writing; peer assessment;
peer feedback

Introduction
Higher education students are engaged in complex learning, i.e. learning that requires
synthesising a range of complex ideas, conducting detailed and critical analysis of
research studies and developing and supporting cogent arguments. In higher education
in the UK, acquisition of this knowledge is typically assessed through long written
assignments, devised by tutors as a means of assessing to what extent students can
demonstrate their understandings of ‘higher knowledge’ (Northedge 2003, 19).
Tutors often spend a considerable amount of time marking written assignments and
writing feedback, telling students how to improve their work in the hope that this
will help them ‘bridge the gap’ (Sadler 2010, 536) between their current performance
and where they need to get to. However, commentators (Price et al. 2010; Sadler 2010;
Nicol 2010) have argued that feedback is failing to support student progression. As
Sadler (2010) and Price et al. (2010) point out, for some students feedback has little
impact. There seems to be a feedback conundrum, with the National Student Survey
(Higher Education Funding Council for England; HEFCE 2010) reporting that
student satisfaction with feedback is lower than with any other indicator, and lowest
for question 9 'Feedback on my work has helped me clarify things I did not
understand.’ Students may request more feedback, but Price et al. (2010, 279) point out
that they often don’t collect feedback and if they do, don’t read it or don’t understand it.
In this paper, I consider why feedback may be failing to impact on students’ perform-
ance. Reporting on a small-scale study which investigated a peer assessor's descriptions
of the process of composing feedback, I explore how peer assessment and composing
peer feedback might help students develop understandings of where they need to get to.
Three main reasons have been suggested to account for students’ lack of engage-
ment with written feedback: the discourse of feedback, the tacit dimension of feedback
and the transmissive/monologic nature of written feedback. Written feedback is typi-
cally couched in a discourse that is part of the tutor’s professional practice and that rep-
resents underlying understandings of ways in which knowledge is represented, broadly
speaking, in their field of enquiry.
The vast range of ‘tacit knowing’ (Polanyi 1964) that tutors draw on in order to
make judgements about students’ work cannot be adequately represented in a small
set of explicit assessment criteria (Sadler 2009). Moreover, as it's difficult to articulate
‘tacit knowing’, tutors may reach for assessment terms such as ‘critically analyse’,
‘evaluate’ and ‘reach a synthesis’ (Lea and Street 1998, 163) and ‘be more analytical’
(Price et al. 2010, 284). The meaning of these terms may vary depending on the par-
ticular context, so while the terms themselves may be familiar to students, their mean-
ings, in a particular context, as used by a particular tutor, may not be (Carless 2006;
Giltrow and Valiquette 1994). What counts as ‘evidence’ in one context may not in
another. Similarly, expectations around what constitutes an analysis may vary depend-
ing on the tutor’s epistemological understandings.
When tutors set assignments they do not always know what students will produce.
Tutors may often set assignments with no clear idea of what they want from the student:
their ideas clarify when they see the variety of work that students produce (Sadler
2014). It is through seeing variation ('comparability') that tutors come to know the
‘possible moves’ (Sadler 2010, 540) that students can produce in response to the
task set. Through seeing variation in student work, the tutor is in a position to accrue
a vast store of ‘tacit knowing’ about ways of addressing the task and thus tutor knowl-
edge of how to tackle the task is enhanced. Tutors begin to develop their understandings
of quality through reading and marking a range of written student assignments (Sadler
2010, 6). It then becomes the responsibility of the tutor to articulate these judgements of
quality to students, and this is not an easy task. Tutors’ understandings of quality may
be drawn from their own educational experiences and their professional experiences of
marking and moderation. Much of this knowledge is tacit and difficult to verbalise, to
‘tell’ to students. One tutor in Lea and Street’s (1998, 163) study states: ‘I know a good
essay when I see it but I cannot describe how to write it’, illustrating the difficulty this
tutor has in verbalising his/her understanding of quality.
A third reason for students' lack of engagement is the transmissive nature of written
feedback (Sadler 2010). As Lillis (2003) points out, written feedback is monologic, i.e.
one-way communication from tutor to student. In the mass higher education system in
the UK, there is often little opportunity for dialogue, so students may have few oppor-
tunities to query feedback, and hence may have limited or erroneous understandings of
the messages that tutors are trying to convey. Even if students do understand these
messages, as Sadler (2010) points out, they are being told how to improve their
work. The feedback message is essentially transmissive. It’s a ‘one way message’
(Sadler 2010, 539), not a dialogue between teacher and student, and students are not
well equipped to decode the message. Moreover, telling students to critically
analyse, for example, does not help them understand what they need to do in order to
achieve this goal.
This reasoning suggests that for feedback to have impact, students need to gain a
better understanding of the assessment process and of quality in student assignments.
Students often produce assignments individually, in isolation. They may not see their
peers’ work, and if they do exchange assignments with friends or for peer review,
they are unlikely to see a range of work. Students, then, are not in a position to
accrue a tacit understanding of quality in written assignments and this makes it difficult
for students to make judgements about the quality of their own work and how it could
be developed. As a result, students can become dependent on tutor judgements.
However, they may, for the reasons discussed above, find it difficult to understand
tutor advice, and if they do understand the advice, still not know how to improve
their work. For students to become autonomous learners they need to be actively
involved in developing understandings of quality and in thinking through how to
apply those understandings to their own work. Nicol (2010, 14) believes that students
are 'active constructors' of feedback and that, as '[p]roducing feedback is more cogni-
tively demanding than just receiving it', students need opportunities to actively
construct and question feedback. Peer assessment, organised so that students see a
range of their peers’ work, gives students an opportunity to make and articulate
judgements.

Peer feedback
Peer feedback, i.e. feedback comments from 'equal status learners' (Gielen et al. 2010,
305), differs from tutor feedback in that peers are not subject experts, nor do they
have the same understanding of the assessment process or as much experience of
making assessment judgements as tutors. Cho and Schunn (2007) describe differences
in peer and tutor feedback; students are not subject experts and therefore may give inac-
curate feedback and focus on style rather than content. Kim (2009) found that students
were concerned about their peers’ abilities to correctly assess assignments. However,
the quality of peer feedback may depend on familiarity with a subject and the stage
in the course at which feedback is given. Wen and Tsai (2008) report on a study
which tracked students through several rounds of giving feedback on peers’ research
proposals. Initially peers’ comments were basic, ‘centred on questions like “what’s
missing?”’ (Wen and Tsai 2008, 62). As students received more input on
educational research methods and became more practised in giving feedback, the
level of sophistication increased; in the last round advice and suggestions for improve-
ments were given.
We might assume that expert tutor feedback leads to greater improvements and that
peer feedback disadvantages students. However, Yorke (2003) suggests that overreli-
ance on tutor feedback can induce ‘learned dependence’ where students become depen-
dent on tutor feedback and do not develop their own understandings of quality. They
become ‘cue-seekers’, ‘hunting for hints’ (Yorke 2003, 489) to maximise grades,
rather than autonomous learners, capable of judging their own and their peers’ work.
Peer assessment may prompt students to check corrections as they recognise the inac-
curacy of their peers’ feedback.
It is not only inaccurate feedback that makes students anxious about peer assess-
ment. Many studies report that students are concerned about unfairness and bias and
are distrustful of their peers’ marking (Liu and Carless 2006; Vu and Dall’Alba 2007;
McConlogue 2012).
Much of the literature on peer assessment is concerned with the reliability of peer
assessor marking and comparisons with tutor marking (Liu and Carless 2006). Rela-
tively little attention has been paid to peer assessor feedback (Patchan, Charney and
Schunn 2009), receiving peer feedback (Kim 2009) and peer assessor processes, i.e. how
they arrive at a judgement about the quality of the work they are assessing. As Nicol
(2011, 4) points out, ‘What has not been studied is the untapped potential of peer feed-
back as a process whereby students construct their understanding and develop critical
judgment by reviewing and commenting on the work of others.’ This paper aims to
address this gap by presenting an explorative in-depth case study of a peer assessor,
investigating the process of composing peer feedback.

The case study
Thinking Writing (TW) at Queen Mary University of London (QMUL) supports aca-
demic tutors to develop the teaching and assessment of writing in their courses. For
several years, TW has been supporting tutors in the School of Engineering and Materials
Science (SEMS) to implement peer assessment. In 2008–9 peer assessment was
implemented in levels 5 and 7 (second year undergraduate and Masters level)
courses. This work was initially funded by a mini-project grant from the Higher Edu-
cation Academy Engineering Subject Centre. These implementations were repeated in
2009–10, and a new implementation was devised for a large class (around 280 students)
of level 4 students (first year undergraduate) following a lecture based course. My role
was to advise on how to implement peer assessment and evaluate the implementations,
initially through questionnaires and focus groups (McConlogue 2012) and finally
through a detailed study of a peer assessor, which this paper reports on.
In these implementations, peer assessment was organised to facilitate dialogue
between tutor and students (Carless 2006). Following online submission of the assign-
ments, a rehearsal marking session (Falchikov 2003) was organised where students read
and wrote comments on three sample reports, judged by the tutor to represent a range of
quality. Reports were sent to students before the rehearsal marking session and they
were asked to individually read, write comments and grade the reports. In the rehearsal
marking session, students compared comments and grades in groups and this led to a
discussion on the meaning of the assessment criteria. The tutor then led a class discus-
sion, answering questions and explaining how she made judgements about the quality
of student writing. This fits with Nicol’s (2010) suggestion of designing opportunities
for dialogue before, during and after the assignment. Bloxham and West (2007, 84)
comment on the importance of ‘verbal clarification’ of tutor expectations which
helped students improve performance. Written feedback on assignments was not
enough. Students needed to hear tutors talk about their expectations and needed to ques-
tion tutors; this can be done in a rehearsal marking session.
Following the rehearsal marking session, the tutor reviewed the submitted student
assignments and roughly sorted them into groups according to his/her judgement of their
quality; students were then allocated four or six assignments for marking that, in the
tutor's view, represented a range. Peer assessors were given two
weeks to mark the reports and submit written feedback on each report. Marks were
uploaded and distributed online along with peer reviewer feedback comments. The
implementations were initially evaluated through a paper questionnaire and focus
groups; however, I wanted a better understanding of how students experienced peer
assessment and designed a small-scale study to track a student through the peer assess-
ment process, focusing on investigating how she composed feedback comments, made
judgements about her peers’ work and developed understandings of quality. In design-
ing this study, I drew on the principles of naturalistic inquiry (Lincoln and Guba 1985,
2000; Erlandson et al. 1993) and Kvale’s (1996) idea of the interview as conversation. I
decided to carry out case study research, a methodology widely used in educational
research when the researcher is seeking to understand the context.
While experimental research may be an appropriate way of investigating the natural
world, Hamilton (1980) argues it is problematic in educational research, particularly
research on learning. This is because educational phenomena are not natural phenomena
but ‘social artefacts’ which are socially constructed and socially complex. It follows that
to understand these phenomena we need a way of exploring and capturing this complex-
ity. Any learning context contains an interplay of multiple subtle variables; attempts at
manipulating these variables distort the context and the findings. Case studies attempt
to delve into the complexity of the situation, exploring the uniqueness of the context.
Thus case study research is 'strong in reality' (Adelman, Jenkins and Kemmis 1980) and
'celebrates the particular and unique' (Simons 1996). It follows
then that it is important to show what is particular and unique about the context of the
case study through collecting a range of data over a longer period of time. So for
example, instead of one-off interviews with participants or a focus group, the researcher
would spend longer in the field, collect different types of data and return to participants
for subsequent follow-up interviews in which s/he would seek to check details to acquire
a deeper understanding of the participant’s world.
In an interpretivist research paradigm (Lincoln and Guba 1985; Denzin 1989) the
researcher would not seek to generalise from the case study, but rather to offer a
detailed interpretation of the data. While some commentators argue that case study
research should allow generalisations (Yin 2009, 44), I would agree with Hamilton
(1980) that in case study research generalisations are inappropriate; indeed Hamilton
argues that in order to generalise we need to assume a static view of reality, to make
generalisations across time and space. Stake (1995, 4) describes ‘intrinsic case study’
research where the researcher is concerned to understand the particularities of a case:

Case study research is not sampling research. We do not study a case primarily to under-
stand other cases. Our first obligation is to understand this one case.

And explains:

The real business of case study is particularisation, not generalisation. We take a particular
case and come to know it well. (Stake 1995, 8)

The case
To investigate one student’s experience of peer assessment in 2009–10 and to ‘come to
know it well’, I tracked a peer assessor from the level 5 (second year undergraduate)
implementation. This peer assessment happened in semester A so I could track the
student into semester B. The participant was self-selected and brought her coursework,
peer feedback and the feedback she had composed to the interviews in which she
described how she wrote comments and how she interpreted peer assessor comments.
A case study of one peer assessor gave me an opportunity to collect more in-depth data
so that I amassed a small but rich dataset.
The data consist of:

. preliminary notes she made on each report;
. the peer feedback comments she composed (four sets of comments);
. three one-hour interviews plus informal chats;
. written reflections on her experience of peer assessment;
. her own report, with peer feedback comments and grades; and
. her assessed coursework in the next semester, with grades and feedback (two
more reports and a short writing task).

Ferdous: a case study on the process of composing and receiving peer feedback
Ferdous’ background is not untypical of undergraduate students at this medium-sized
university, which draws students from an area of London with a diverse and
vibrant mix of ethnic communities. She was born in Somalia and, although she had been
schooled in the UK since year 7 (age 12), she considers English to be her third
language (with Somali and Arabic her first and second languages respectively).
Ferdous has had some previous experience of making assessment judgements, as she
marked some work at secondary school, using guidelines and a marking scheme pro-
vided by her teachers. At the time of the study, Ferdous was a level 5 (second year
undergraduate) Medical Engineering student. She was a diligent, conscientious
student who achieved a good grade for her peer assessed assignment, and excellent
grades for subsequent assignments. Her story provides a rich insight into one peer
assessor's experience of making and articulating judgements of quality.

How she set about peer assessing: composing comments
Ferdous was extremely industrious about writing feedback comments for peers. She
started peer assessing early, doing a little work on her four allotted reports each day
throughout the two-week period. She began by skimming the reports to check details,
e.g. whether they had all the necessary sections, conformed to the word length, etc. She wrote
notes on this preliminary work explaining:

… first of all, I’ve written just minor comments, things like, whether or not the person had
managed to fit it into five pages.

So, in common with peer assessors in the first round of Wen and Tsai’s study (2008),
Ferdous began by looking at the basics. She then went on to read the reports in more
detail and compose feedback. She explained that she used the assessment criteria her
tutor gave her to guide her when marking the report:

I was marking it on the basis of what [the Professor] has given us as the criteria of [how]
we should write a report …

However, she did not rely solely on assessment criteria but drew on other documents
and the sample reports from the rehearsal marking session, all of which seemed to help for-
mulate her understanding of quality in report writing.

The assessment criteria from, not just from the sample that I was given, but I was looking
at three things; I was looking at the sample that we were provided [with] and the sample
comments that [the researcher] had given us. And I was also looking at the ideal report, or
the perfect report, [we] were given in the class [rehearsal marking]. And, finally, I was
looking at the requirements [task instructions] that were given to us before we wrote
the report.

She then went back over the reports, commenting in detail on specific sections. When
writing comments she followed the structure of the sample feedback comments she’d
been given in the rehearsal marking session (general comment, specific points and
three things to improve). She composed long and detailed feedback comments of
around 500 words on each report, explaining why she thought it was necessary to
write such extensive comments:

I thought maybe I’m overdoing it, but then I thought, you know what, if someone was
trying to mark my report, I’d want the best possible comments …

The thoroughness with which Ferdous approached this task and her detailed and com-
prehensive comments demonstrate her commitment to giving extensive peer feedback.
Ferdous estimated that she spent about 4–5 hours on marking and writing comments on
four reports. In this respect she seems to be practising what Cartney (2010, 558) calls
‘academic altruism’, as students strive to provide their peers with good quality feedback
comments.
When asked what she felt she had learned from reading her peers’ reports, Ferdous said
she came to the understanding that there is no one correct way of writing a laboratory
report. She identified strengths even in reports that she considered to be ‘weak’.

Even the very poor reports, I would say that there was always something that I thought,
‘oh, hang on a minute, I haven’t done that in my report’.

So, I think, definitely, in every report, there was something that I reflected back on my
own. And I’m happy to be that kind of person, who is always looking deeper into
things, and not just at the superficial level, but I am always thinking back to me and
how to improve things about my report and what’s wrong with it. Because it’s not just
about criticising, criticising, criticising, both ways you can learn. There [are] negatives
and positives in everyone’s report. So, definitely, it was a very good experience.

She seemed to be comparing her peers’ reports to her own, benchmarking her report
against her peers’, checking where her report ‘fitted in’ and what she could improve
on. Peer assessment helped her benchmark her work and helped her see areas for
improvement: other studies have reported a similar effect (Cowan 2010). She was learn-
ing from reading and making judgements about her peers’ work, relating their work to
hers and thinking about how she could improve. She described writing comments as
‘joyful’; it seems to have been a very positive experience for her and one that she
learned a lot from: ‘I felt that I’ve had a joyful experience’.
This positive experience of reading and comparing peers’ reports seemed to help
Ferdous develop her own understanding of quality and, I’m arguing, helped her to
develop ‘assessment literacy’ (Price et al. 2010, 288), as she began to experience the
process of making assessment judgements. Through this process of comparing and dis-
cussing the reports, and interpreting assessment criteria, Ferdous began to develop her
own understanding of what makes a good report in engineering, not just from listening
to the explanations given by her tutor in class and in the rehearsal marking, but also by
marking a range of reports and starting to formulate her notion of quality, starting to
make and articulate marking judgements.
Before the peer assessment, Ferdous already held strong beliefs about quality in
report writing; these were beliefs she had gained from her previous experiences as a
learner. For example, she seemed to have strong beliefs about the structure of a
report, believing that she should discuss applications of polyethylene in the introduc-
tion, to provide the reader with a ‘preview’:

So, in my conclusion [ … ] I’ve basically given what I thought about what I’ve received
from the results that I’ve got, and I’ve said, ‘I think the polyethylene that I was testing is
low density polyethylene, because it fits in with the results’, and I didn’t actually go into
the applications of polyethylene. That, I thought, was general; that, I thought, was back-
ground information, so, I included it in the introduction as a preview, so that someone
could get a taste about why we’re doing this experiment. I think, personally, that’s how
I prefer structuring reports.

This is in contrast to the way some of her peers had structured their reports; some of
them had put information about applications in the conclusion, but Ferdous felt the
introduction was the best place to discuss applications; she saw it as a way of introdu-
cing the reader to the topic and stimulating interest in the topic. She explained why she
felt this was important:

… so that even someone who’s not an engineer could [ … ] think, ‘I see this is important’
… I felt it’s much more nice, it takes away the clinicalness and it gives someone a taster
and they would be interested in the report, they would want to carry on [ … ]. If the
introduction had, ‘this is what polyethylene is used in’. It felt more like a novel; it felt
sweet.

What is interesting here is Ferdous’ concern to communicate with the reader, to draw
them into the report by giving them something that they can easily relate to – appli-
cations. She understood that she was writing the ‘story’ of her experiment (‘it felt
more like a novel’) and derived obvious enjoyment from this (‘it felt sweet’).
Reading and commenting on a range of reports seemed to help Ferdous understand
how to improve her own work, helping her think about how she could move on and
develop her writing. However, the excellent report given to the class in the rehearsal
marking session seemed to have less impact. Ferdous explained that:

… it [the excellent report] is not something I could replicate, I’m not sure how many other
people could replicate it, certainly not if they left it last minute, you couldn’t replicate it
… I found that report quite complicated.

This suggests that if students are presented with work that is too removed from their
current understanding, they may not be at the stage in their learning where they can
make use of this work. The Vygotskyian (1978) idea of the instructional range may
be relevant here. If the excellent report falls outside of a student’s current understand-
ing, it has little impact on their learning as it falls outside their potential developmental
level. However, work that is close to or just above the student’s current understanding
helps students to move on. The common practice of giving students model answers may
make far less impact on their learning than showing a range of responses to a task and
supporting students to make judgements about those responses.

Receiving peer feedback comments
The effect of receiving peer feedback comments is not widely researched and Kim
(2009) recommends that to better understand the experience of receiving feedback,
in-depth qualitative studies are needed. When I began this project, I conceived of
peer assessment as a way of providing students with timely, detailed feedback which
they could use to improve their next assignment. I was aware of concerns in the litera-
ture about the quality of peer assessor feedback but thought that with a thorough rehear-
sal marking session, and with sample feedback comments, peer assessors would be able
to give detailed feedback. So I didn’t anticipate the level of negative comments about
peer feedback, both in questionnaires and the focus groups. Ferdous’ interview com-
ments explain her reaction to peer feedback. While Ferdous’ experience of reading
reports and writing comments was ‘joyful’, her experience of receiving comments
was less positive. She was ‘dismayed’ when she received her peer assessors’ comments
because they did not tell her how to improve:

It’s not helpful to me when I’m doing future reports. The only help I would get is from me
knowing what I could have done better in my report, but it would be nice, as well, to have
a pool of other comments, looking at my report and saying to me, ‘well, this is what you’re
missing and this is where you can improve’.

Vu and Dall’Alba (2007) report that students found some of their peers’ comments
basic and that they were disappointed with the comments as they wanted suggestions
for revision.
As with tutor feedback, Ferdous encountered problems in trying to interpret her
peers’ feedback: ‘“Suitability was not discussed”, what do they mean by that?’ While
she was dismayed overall and felt the advice from peers, at times, was wrong, she
still felt there were some things she could learn. Peer assessors commented on the
lack of graphs in her report, and she realised that she should have included graphs:

No, there are a few things I can learn from that. Because of the time that I had, I didn’t do
the graphs.

A more serious issue is Ferdous’ loss of trust in her peers. As a result of the peer assess-
ment feedback, she felt that her peers were not ‘dependable’ markers:

… because I know now that I can’t depend on them giving me the same comments that I
would give them, so, I can’t trust them. So I’ve kind of lost that trust in expecting students
to give me back proper marks.

Inaccurate feedback is an issue in peer assessment (Cho and Schunn 2007); students can
feel uncertain about their peers’ comments, though this uncertainty may encourage
more student autonomy (Yorke 2003). Vu and Dall’Alba (2007) report that students
thought some peers’ feedback was ‘basic’ and the quality of comments varied.
Cartney (2010) describes the anger of her students who had spent considerable time
giving feedback, only to receive what they considered were poor quality comments
from peers who didn’t fully participate. Peer assessment puts students in the position
of having to make judgements about their peers’ work. Some students may resent
having power over fellow students or fellow students having power over them (Liu
and Carless 2006, 285). Other issues for peer assessors are friendship marking and
resentment among students of what they saw as the ‘competition’ of peer assessment
(Liu and Carless 2006).
Helping students to develop as peer assessors seems to be a long-term endeavour.
Students (like markers) need considerable practice and seem to need time to develop
their expertise in the subject matter and their sense of judgement of the quality of an assignment.
I would argue that, initially, it’s better to withhold peer feedback comments until peer
assessors have developed some expertise in making judgements and can suggest ways
of improving. Initially withholding the first attempts at feedback would avoid some of
the problems discussed above, including loss of trust within the group. The focus then
could be on the composition of feedback rather than on receiving it. As students
develop expertise in the subject matter, and in giving feedback, comments could be
exchanged.

Conclusion
Students need to be able to judge the quality of their written work because it is only
through doing this that they can critique their work and move on. Peer assessment
can provide an opportunity for students to see a range of their peers’ work, to bench-
mark their own work and to articulate an understanding of good quality work. Peer
assessment can encourage ‘assessment dialogues’ (Carless 2006) with tutors and
their peers, such as the dialogue that can be generated in rehearsal marking sessions,
thus opening out the assessment process.
The process of composing peer feedback, of articulating those judgements, seemed
to help the peer assessor tracked in this study to critically judge aspects of her own work
and look for areas to improve. In making judgements about her peers’ work she was
drawing on not only the guidance and documents the tutor provided her with, but
also her own strong beliefs about how a report should be written. Seeing a range of
work seemed to help her assess her own work and seemed to be more effective in
helping her move on than an excellent ‘model answer’ which was beyond the level
she could reach at that point.
Supporting students to make judgements about their own and their peers’ work may
have more impact on student learning than traditional, transmissive tutor feedback.
However, this assumes that students will engage fully with peer assessment. In this
study, as in others, not all students fully engaged. Some of Ferdous’ peer assessors
wrote slight and uninformative feedback comments which led to distrust within the
group, damaging relationships. Nicol (2010, 14) claims that ‘ … the construction of
feedback is likely to heighten significantly the level of student engagement, analysis
and reflection with feedback processes. From this perspective, one might argue that
constructing feedback is at least as, if not more, beneficial than receiving it.’ For this
claim to be true, all students in a class need to actively engage and buy into the idea
of peer assessment.
The reasons for lack of engagement may be a belief in the tutor as expert marker, a
distrust of peers’ ability to judge, friendship marking, and resentment that students have
power to award marks. McConnell (2002) describes collaborative assessment where
students and tutors share power and are fully involved in the setting of the assignment
and the creation of the assessment criteria. Building trust in a group seems to be an
essential prerequisite for effective peer assessment. Perhaps empowering students
through involving them in the co-construction of the assignment and assessment criteria
helps develop trust and build a supportive learning environment where students can
practise ‘academic altruism’, giving their peers good quality feedback comments.
However, as this case study and other research suggest (Vu and Dall’Alba 2007;
Cartney 2010; Liu and Carless 2006), not all learners are ready to take on the role of
assessors. For these learners, or for groups that include learners who are distrustful
of peer assessment, a first step may be composing, rather than receiving, feedback com-
ments. As Ferdous’ case study suggests, composing feedback seems to be a more
powerful learning tool and one that students should experience.
The process of composing peer feedback has been a neglected aspect of peer assess-
ment; research in this field has tended to concentrate on reliability of peer marking and
comparisons of peer–tutor marking, often through quantitative research studies. More
qualitative research into the process of composing peer feedback is needed; we need to
better understand whether and how assessing and composing comments on a range of
work can help students develop ‘assessment literacy’ (Price et al. 2013).

Acknowledgements
I am indebted to Professor Julia Shelton of SEMS, Queen Mary University of London, for her
cooperation and insights during this research, and to Sally Mitchell of Thinking Writing at Queen
Mary University of London for perceptive comments on a draft of this paper.

References
Adelman, C., D. Jenkins, and S. Kemmis. 1980. Rethinking case study: Notes from the second
Cambridge conference. In Towards a science of the singular, ed. H. Simons, 47–61.
University of East Anglia, Centre for Applied Research in Education, Occasional
Publication, No. 10.
Bloxham, S., and A. West. 2007. Learning to write in higher education: Students’ perceptions of
an intervention in developing understanding of assessment criteria. Teaching in Higher
Education 12, no.1: 77–89.
Carless, D. 2006. Differing perceptions in the feedback process. Studies in Higher Education 31,
no. 2: 219–33.
Cartney, P. 2010. Exploring the use of peer assessment as a vehicle for closing the gap between
feedback given and feedback used. Assessment & Evaluation in Higher Education 35, no.
5: 551–64.
Cho, K., and C.D. Schunn. 2007. Scaffolded writing and rewriting in the discipline: A web-
based reciprocal peer review system. Computers and Education 48: 409–26.
Cowan, J. 2010. Developing the ability for making evaluative judgements. Teaching in Higher
Education 15, no. 3: 323–34.
Denzin, N.K. 1989. Interpretative interactionism. Thousand Oaks, CA, London and New Delhi:
Sage.
Erlandson, D.A., E.L. Harris, B.L. Skipper, and S.D. Allen. 1993. Doing naturalistic inquiry: A
guide to methods. Newbury Park, CA: Sage.
Falchikov, N. 2003. Involving students in assessment. Psychology Learning and Teaching 32,
no. 2: 102–8.
Gielen, S., E. Peeters, F. Dochy, P. Onghena, and K. Struyven. 2010. Improving the effective-
ness of peer feedback for learning. Learning and Instruction 20, no. 4: 304–15.
Giltrow, J., and M. Valiquette. 1994. Genres and knowledge: Students writing in the disciplines.
In Teaching and learning genre, ed. A. Freedman and P. Medway, 47–62. Portsmouth,
NH: Boynton/Cook.
Hamilton, D. 1980. Some contrasting assumptions about case study research and survey analy-
sis. In Towards a science of the singular, ed. H. Simons, 78–92. University of East Anglia,
Centre for Applied Research in Education, Occasional Publication No. 10.
HEFCE (Higher Education Funding Council for England). 2010. National student survey find-
ings and trends 2006 to 2009, Issues Paper. London: HEFCE. Available at http://www.
hefce.ac.uk/media/hefce1/pubs/hefce/2010/1018/10_18.pdf (accessed January 13, 2014).
Kim, M. 2009. The impact of an elaborated assessee’s role in peer assessment. Assessment &
Evaluation in Higher Education 34, no. 1: 105–14.
Kvale, S. 1996. Interviews—an introduction to qualitative research interviewing. Thousand
Oaks, CA: Sage.
Lea, M.R., and B.V. Street. 1998. Student writing in higher education: An academic literacies
approach. Studies in Higher Education 23, no. 2: 157–72.
Lillis, T. 2003. Student writing as ‘academic literacies’: Drawing on Bakhtin to move from cri-
tique to design. Language and Education 17, no. 3: 192–207.
Lincoln, Y.S., and E.G. Guba. 1985. Naturalistic inquiry. Beverley Hills, CA: Sage.
Lincoln, Y.S., and E.G. Guba. 2000. Paradigmatic controversies, contradictions and emerging
influences. In The Handbook of Qualitative Research, ed. N.K. Denzin and Y.S. Lincoln,
163–88. 2nd ed. Thousand Oaks, CA, London and New Delhi: Sage.
Liu, N.F., and D. Carless. 2006. Peer feedback: The learning element of peer assessment.
Teaching in Higher Education 11, no. 3: 279–90.
McConlogue, T. 2012. But is it fair? Developing students’ understanding of grading complex
written work through peer assessment. Assessment & Evaluation in Higher Education
37, no. 5: 113–23.
McConnell, D. 2002. The experience of collaborative assessment in e-learning. Studies in
Continuing Education 24, no. 1: 73–92.
Nicol, D. 2010. From monologue to dialogue: Improving written feedback processes in mass
higher education. Assessment & Evaluation in Higher Education 35, no. 5: 501–17.
Nicol, D. 2011. Developing students’ ability to construct feedback. Presented at 8th Annual
Enhancement Themes Conference, March 2–3, Heriot Watt University. Available at
http://www.enhancementthemes.ac.uk/resources/publications/graduates-for-the-21st-century
(accessed January 13, 2014).
Northedge, A. 2003. Rethinking teaching in the context of diversity. Teaching in Higher
Education 8, no. 1: 17–32.
Patchan, M.M., D. Charney, and C.D. Schunn. 2009. A validation study of students’ end com-
ments: Comparing comments by students, a writing instructor, and a content instructor.
Journal of Writing Research 1, no. 2: 124–52.
Polanyi, M. 1964 [1958]. Personal knowledge: Towards a post-critical philosophy.
New York: Harper Torchbooks.
Price, M., K. Handley, J. Millar, and B. O’Donovan. 2010. Feedback: All that effort, but what is
the effect? Assessment & Evaluation in Higher Education 35, no. 3: 277–89.
Price, M., C. Rust, B. O’Donovan, K. Handley, and R. Bryant. 2013. Assessment literacy: The
foundation for improving student learning. Oxford: The Oxford Centre for Staff and
Learning Development.
Sadler, D.R. 2009. Indeterminacy in the use of preset criteria for assessment and grading.
Assessment & Evaluation in Higher Education 34, no. 2: 159–79.
Sadler, D.R. 2010. Beyond feedback: Developing student capability in complex appraisal.
Assessment & Evaluation in Higher Education 35, no. 5: 535–50.
Sadler, D.R. 2014. Learning from assessment events: The role of goal knowledge. In Advances
and innovations in university assessment and feedback: A festschrift in honour of Professor
Dai Hounsell, ed. C. Kreber, C. Anderson, N. Entwistle, and J. McArthur. Edinburgh:
Edinburgh University Press.
Simons, H. 1996. The paradox of case study. Cambridge Journal of Education 26, no. 2:
225–40.
Stake, R.E. 1995. The art of case study research. Thousand Oaks, CA: Sage.
Vu, T.T., and G. Dall’Alba. 2007. Students’ experience of peer assessment in a professional
course. Assessment & Evaluation in Higher Education 32, no. 5: 541–56.
Vygotsky, L.S. 1978. Mind in society: The development of higher psychological processes.
Cambridge, MA: Harvard University Press.
Wen, M.L., and C.-C. Tsai. 2008. Online peer assessment in an inservice science and mathemat-
ics teacher education course. Teaching in Higher Education 13, no. 1: 55–67.
Yin, R.K. 2009. Case study research: Design and methods. Thousand Oaks, CA: Sage.
Yorke, M. 2003. Formative assessment in higher education: Moves towards theory and the
enhancement of pedagogic practice. Higher Education 45: 477–501.