
Assessment & Evaluation in Higher Education

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/caeh20

Exploring teacher perceptions of different types of ‘feedback practices’ in higher education: implications for teacher feedback literacy

Cecilia Ka Yuk Chan & Jiahui Luo

To cite this article: Cecilia Ka Yuk Chan & Jiahui Luo (2021): Exploring teacher perceptions of
different types of ‘feedback practices’ in higher education: implications for teacher feedback literacy,
Assessment & Evaluation in Higher Education, DOI: 10.1080/02602938.2021.1888074

To link to this article: https://doi.org/10.1080/02602938.2021.1888074

Published online: 10 Mar 2021.


Exploring teacher perceptions of different types of ‘feedback practices’ in higher education: implications for teacher feedback literacy
Cecilia Ka Yuk Chan and Jiahui Luo
University of Hong Kong, Hong Kong

ABSTRACT
Universities around the world are encouraging teachers to provide more constructive feedback to support student learning, but do teachers know how to distinguish constructive feedback? What pedagogical practice is considered as feedback and what is not? For example, is a rubric a type of feedback? To date, very limited research has answered these questions from university teachers’ perspectives and the current study aims to address this gap. In this study, ten teacher training workshops were conducted in a university in Hong Kong, with an intention to enhance teachers’ competence in assessment and feedback. During the workshops, we first adopted Poll Everywhere to survey whether teachers (N = 248) recognise six types of common pedagogical practices as feedback, and subsequently used this as a base to discuss with teachers the reasons behind their responses. Findings reveal teachers’ varied perceptions of these practices as feedback, which may be related to their varied understandings of feedback purposes. The paper calls for an explicit acknowledgement of the multiple purposes of feedback, and concludes with implications for teacher feedback literacy in higher education.

KEYWORDS
Feedback; higher education; teacher perceptions; feedback literacy; feedback purpose

Introduction
For many years, researchers and practitioners have endeavoured to study what constitutes good feedback (Nicol and Macfarlane-Dick 2006; Dawson et al. 2019), how to utilise and engage with feedback (Price, Handley, and Millar 2011; Carless and Boud 2018), and how, eventually, to make feedback most beneficial to student learning. Feedback has come a long way from being merely ‘something given by teachers’ to a process ‘driven by the student rather than the educator… making use of information to effect change’ (Dawson et al. 2019, p. 26).
While the field of feedback research is burgeoning, the literature seems to be vague about
a fundamental question on actual feedback practices; i.e. among a wide range of pedagogical
practices, what practice is considered as feedback and what is not? One of the most widely
cited feedback definitions came from Hattie and Timperley (2007), arguing that feedback is
‘information provided by an agent (e.g. teacher, peer, book, parent, self, experience) regarding
aspects of one’s performance or understanding’ (p.81). However, it is not entirely clear about
what they meant by ‘information’. If this information does not directly contribute to student
learning, can it still be considered as feedback? Are there any requirements for the ‘amount’ and ‘quality’ of information for it to be considered as feedback? To date, no such guidelines exist to pinpoint what practices should be regarded as feedback and what should not.

CONTACT Cecilia Ka Yuk Chan Cecilia.Chan@cetl.hku.hk
© 2021 Informa UK Limited, trading as Taylor & Francis Group
Instead of investigating the ‘threshold’ for practices to be recognised as feedback, current research tends to understand and characterise feedback in its most ideal form. Abundant studies have provided models and long feature lists of effective feedback, suggesting that feedback should be personal and individualised (Ferguson 2011; Li and De Luca 2014), detailed and specific (Ferguson 2011; Dawson et al. 2019), timely (Gibbs and Simpson 2004; Poulos and Mahony 2008), criteria-referenced (O’Donovan, Price, and Rust 2001; Poulos and Mahony 2008), and constructive (i.e. pointing to future improvement) (Lizzio and Wilson 2008; Dawson et al. 2019), and that it should encourage positive beliefs and facilitate self-assessment in learning (Nicol and Macfarlane-Dick 2006). These studies are valuable, but in reality it is often difficult to satisfy all these features in every feedback provision. The question is – are all the above features essential in a feedback practice? For example, if the ‘feedback’ is not detailed or personal, is it still feedback? If the ‘feedback’ is detailed but not constructive, is it still feedback?
So far, very limited research has attempted to answer the above questions from teachers’
perspectives. Understanding how teachers distinguish feedback practices is important, because
this will influence how they actually design and deliver feedback in practice and will speak to
teachers’ development of feedback literacy.
To address the research gap, this study first reports 248 university teachers’ perceptions of six common pedagogical feedback practices in terms of their eligibility as feedback, and subsequently uses this as a base to discuss the criteria adopted by teachers to define feedback. In what follows, we will first introduce two potential reasons that prevent a consistent and clear-cut definition of what constitutes feedback, and then review studies on teacher feedback perceptions. Basic features of the six ‘feedback’ practices under research will also be outlined.

Feedback as an evolving and complicated construct


The meaning of feedback has changed over recent decades, making it difficult to achieve a consistent understanding of what constitutes feedback. In general, three paradigms have contributed to how we define and understand feedback – (1) teacher-centred transmission-oriented paradigm; (2) student-centred process-oriented paradigm; and (3) ecological/sociomaterial paradigm.
Conventionally, feedback is understood as an ‘end product’, namely ‘knowledge of results’ or
‘correction of errors’ (Gibbs and Simpson 2004, p. 17). As the traditional focus was on the
information delivered to students, there has been much discussion on how teachers deal with
feedback (Hattie and Timperley 2007). Teachers’ provision of feedback is often found entangled
with a range of issues, including increased workload, limitations in time, difficulties in catering
for individual students in large classes, and a lack of motivation and expertise (Gibbs and
Simpson 2004; Henderson, Ryan, and Phillips 2019).
In the past decade, this conventional way of seeing feedback as unidirectional transmission to
students has developed towards a more student-centred and sustainable model (Carless et al.
2011). In this sustainable model, feedback involves a dialogic process in which students take
increased responsibility in seeking and acting on feedback (Sadler 2010; Chan and Luo 2020). The
effectiveness of feedback depends on students’ feedback literacy as well, i.e. their ability to appreciate feedback, make judgements, manage emotions, and take action (Carless and Boud 2018).
More recent understandings operate from ecological and sociomaterial perspectives. Chong
(2021) interpreted feedback via an ecological lens and argued for a greater acknowledgement
of the messy and situated nature of feedback. According to Chong (2021), meaningful student
engagement with feedback is contingent upon two sets of factors: contextual factors (e.g.
cultures; feedback modes; teacher-student relations) and individual factors (e.g. student learning
goals, feedback beliefs). Gravett (2020), on the other hand, foregrounded ‘social, material, spatial
and temporal actors’ which have long been regarded merely as the ‘backdrop’ in feedback activities (p. 9). Even if teacher feedback is optimal and students are literate in feedback processes, factors such as space (e.g. students feeling more relaxed to discuss feedback with teachers in libraries or cafés), time (e.g. lack of time to attend office hours) and artefacts (e.g. technology-afforded feedback) would still influence student engagement with feedback.
These studies support the view of feedback as an evolving and complicated construct. Therefore, when it comes to feedback provision, teachers often have to navigate a complex landscape and make compromises where needed (Carless and Winstone 2020). However, not much research has revealed teachers’ perceptions of feedback practices.

Multiple purposes of feedback


Another reason precluding a consistent understanding of feedback concerns its multiple purposes. In general, it is agreed that the purpose of feedback in education is to improve student
learning (Boud and Molloy 2013; Carless and Boud 2018), but this perception risks being too
broad as it does not clarify what student improvement feedback should be targeting. Price and
colleagues (2010, p. 278) called feedback ‘a generic term which disguises multiple purposes
which are often not explicitly acknowledged’.
Some studies have shed light on feedback’s multiple purposes. Nelson and Schunn (2009) summarized three main meanings of feedback, namely motivational, reinforcement and informational. In Hattie and Timperley’s (2007) conceptual paper, the authors proposed four different
focuses of feedback. First, task feedback targets the quality of the end products, including
corrections. Second, process feedback is aimed at the process of completing the task. Third,
self-regulation feedback focuses on how students monitor their own learning, and the fourth,
personal feedback, is directed to the ‘self’, including comments such as ‘You are a great student’.
A more recent study comes from Dawson and colleagues (2019) based on 406 staff and 4514
student surveys. Dawson et al. also reported four broad categories of feedback in terms of their
purposes, i.e. ‘identify strengths and weaknesses’, ‘affective’, ‘justify grades’ and ‘improve’. Feedback
used to ‘identify strengths and weaknesses’ is similar to the conventional model of feedback
where students are told what is good and bad about their work instead of how to improve.
‘Affective’ feedback is to acknowledge student efforts and motivate them, whereas ‘justify grades’
feedback is used to explain students’ grades. Under the ‘improve’ category, apart from ‘unspecified improvement’, Dawson et al. listed five foci, i.e. ‘improvement in work’, ‘improvement in
understanding (such as standards and learning objectives)’, ‘improvement in grades’, ‘improvement
in study strategy’ and ‘improvement in reflection, self-evaluation or critical thinking’.
Judging from these studies, there is a lack of unanimity on the multiple purposes of feedback – scholars have used different research methods and standards to study feedback purposes. To avoid further confusion, we will adopt Dawson et al.’s (2019) terms when referring to feedback purposes in later sections.

Teacher feedback perceptions and feedback literacy


Teacher perceptions matter because they are significant contributors to the actions teachers
take (Pajares 1992). Teacher feedback perceptions are reflective of their feedback literacy, i.e.
‘teachers’ knowledge, expertise and dispositions to design feedback processes in ways which
enable student uptake of feedback and seed the development of student feedback literacy’
(Carless and Winstone 2020, p.4). Carless and Winstone identified three dimensions of teacher
feedback literacy, including ‘designing for uptake’, ‘relational sensitivities’ and ‘managing
practicalities’.
However, few studies have explicitly explored teachers’ perceptions of what practices should be considered as feedback. One relevant study, conducted by Irving and colleagues (2011), interviewed 11 New Zealand secondary school teachers about their conceptions of assessment and feedback. Findings showed that the teachers recognised three types of feedback: information about learning, grades/marks and comments on student behaviour/effort. Another relevant
study was conducted in a similar context but with a quantitative design. Brown, Harris, and
Harnett (2012) surveyed 518 primary and secondary school teachers in New Zealand regarding
their feedback conceptions. The study found that teachers tended to accept practices that
improve learning as feedback, but did not believe in ‘feedback’ that simply enhances students’
well-being (e.g. praise for enhanced self-esteem).
It should be noted that both studies were based on primary/secondary school contexts and the findings may not translate completely to higher education. Besides, as outcome-based education and self-regulated learning have gained growing attention in recent years, these studies did not capture practices that have emerged as central in this context (e.g. the use of rubrics and exemplars as feedback).

Six types of pedagogical practices as ‘feedback’


In this section we review six common types of pedagogical practices that are under investigation
in this study. Rationales for selecting these practices will be provided later.
Stamps/Digital badges: Digital badges are ‘electronic symbols used to document performance
and achievement’ (Carey and Stefaniak 2018, p.1211). They are widely used as a credentialing
mechanism to document student learning (Willis, Flintoff, and McGraw 2016). Stamps are similar
to digital badges, except that they are used as a tool on paper and are widely adopted in
primary and secondary school education. By grouping them together, we refer to badges both
in virtual space and are paper-based. There are debates concerning for what purpose badges
are most suitable (e.g. skill recognition, assessment), and recently some researchers have noted
their potential to generate feedback opportunities. McDaniel and Fanfarelli (2015) argued that
when badges offer information about students’ performance, they move beyond simple extrinsic
reward into feedback. Carey and Stefaniak (2018), on the other hand, believed badges should
best perform as motivational summative feedback.
Grades: Grades are often presented in the form of letters (A/B/C) and numbers for examinations and assignments. It is debatable whether feedback is used to justify grades, or whether grades themselves can be seen as a type of feedback which not only informs students of their position and attainment, but also leads to learning progress by influencing students’ motivation and performance (Lipnevich and Smith 2014). Grades and feedback are often viewed dichotomously in research,
with some researchers reporting that students are ‘grade-oriented’ and do not care about feed-
back (Carless 2006; Irving, Harris, and Peterson 2011). In Perera et al.’s (2008) study surveying
over 400 students, students stated that the provision of grades is inadequate as feedback and
called for more teacher-student dialogues to clarify issues.
Simple corrections: By simple corrections, we mean corrections of right or wrong (without
further comments) made to students’ work/performance to indicate where the mistakes are
made. In second language learning research, simple corrections have been referred to as ‘corrective feedback’ – a well-researched practice generally considered effective in language learning
(Ellis 2006; Mackey et al. 2007). However, it is also argued that simple corrections reinforce the
authoritative status of academics (Ivanič, Clark, and Rimmershaw 2000) and they are often not
considered ideal, compared to detailed comments, in feedback literature (Dawson et al. 2019).
Rubrics: A rubric is ‘a grid of assessment criteria describing different levels of performance
associated with clear grades’ (Reddy and Andrade 2010, p. 435). Some researchers warned that
rubrics may restrict students and undermine their creativity (Bell, Mladenovic, and Price 2013; Gezie et al. 2012). There are risks that students will see rubrics as merely checklists to achieve a grade (Torrance 2007), and overly detailed rubrics will reduce students’ autonomy to act

independently (Boud and Falchikov 2006). That said, rubrics suit well the preference for ‘criteria-based’ feedback in the literature, and are powerful in making learning tasks transparent as well
as engendering deep learning (Nordrum, Evans, and Gustafsson 2013; Cheng and Chan 2019).
Comments to the whole class: Comments to the whole class refer to feedback provided to
the whole class which does not necessarily relate to individuals. On one hand, such general
comments enable students to learn from other students’ work and reduce teachers’ workload
(Race 2005; Wang, Yu, and Teo 2018); on the other hand, these comments lack individuality and
may be difficult for individual students to act upon (Race 2005; Poulos and Mahony 2008).
Generic exemplars: Exemplars are ‘carefully chosen samples of student work which are used
to illustrate dimensions of quality and clarify assessment expectations’ (Carless and Chan 2017,
p. 930). While some scholars believe exemplars allow room for students to construct their own
learning by encouraging students to self-examine their work (Scoles, Huxham, and McArthur
2013; Chong 2019), others note plenty of issues in using exemplars as feedback. For example,
Dawson et al. (2019, p.31) pointed out that exemplars do not ‘fit within everyday educator or
student definitions of feedback’ and hence are not commonly recognized as feedback. Students
may also plagiarize exemplars in their work or find it challenging to accurately evaluate and
make use of exemplars for future improvement (Carless and Chan 2017). That said, similar to
rubrics, generic exemplars have the potential to improve students’ understanding of standards
and their self-evaluation.

Current study
The literature review highlights the difficulty of providing a clear-cut guideline on what constitutes feedback, and the lack of studies on what university teachers perceive as acceptable feedback in a complex educational landscape. In response to this research gap, the overarching goal of the study is to understand:

How do university teachers distinguish a pedagogical practice as feedback?

To address this goal, the study will rely on teachers’ perceptions of the six types of pedagogical practices, i.e. whether these practices can be regarded as legitimate forms of feedback.

Method
Research background
The research is based on 10 professional development workshops for university teachers on feedback practice (approximately 20-30 participants each time), conducted from November 2018 to May 2020 at a comprehensive university in Hong Kong. The goal of the workshops was to help teachers develop knowledge and skills for assessment and feedback, familiarize them with the university’s assessment and feedback policies, and provide them with a platform to discuss their assessment practice with colleagues. The leading facilitator of the workshops (also the first author of the paper) is an expert in professional development at the university with over fifteen years of experience in mentoring teachers in higher education. In what follows, we will introduce the four phases for conducting this research.

Phase 1: selection of the 6 types of pedagogical practices


There is no standardized list of what practices are acceptable as feedback, and teachers usually have a high degree of autonomy to decide what they consider as feedback and how they deliver it. It thus becomes important to understand how teachers navigate their provision of feedback among a wide range of pedagogical practices in higher education. To elicit teachers’ opinions on this aspect, we needed examples to prompt teachers to elaborate on their understandings of feedback.
Selection of the six ‘feedback’ examples was based on the following criteria: (1) they should be common pedagogical practices in higher education that most teachers have experience with and can relate to; (2) to stimulate discussion, their effectiveness as feedback should be frequently questioned in the literature; (3) the focus should be on the ‘content’ of the practice rather than its ‘mode’ (e.g. audio and video feedback) or ‘source’ (e.g. peer and self-feedback). Following these criteria, before the 10 workshops, the first author solicited opinions from teachers (approximately N = 50) in her previous workshops, resulting in the six ‘feedback’ types. Subsequently, the authors consulted the literature on these ‘feedback’ types, and a further discussion was conducted within the authors’ research team of six education researchers, which confirmed the six types of ‘feedback’.

Phase 2: the pilot


Before the formal data collection, pilot studies were conducted in two teacher workshops facilitated by the first author (N = 67 in total). The goal was to test the research protocols so the final plan could be adjusted and refined. In the pilot, paper-based surveys were distributed to participants to inquire whether they considered the six pedagogical practices as feedback, and a discussion followed after the teachers completed the surveys.
Three issues were gleaned from the pilot for improvement in the formal research. First, it was found that not all teachers were entirely clear about what is meant by ‘exemplars’ and ‘digital badges’; hence it was necessary to introduce the six pedagogical practices to teachers before they filled in the survey. Second, as it is not possible to display results instantly with paper surveys, we adopted an interactive classroom response platform (i.e. Poll Everywhere) in the formal study to facilitate discussion. Third, due to time limits, not all teachers had the opportunity to voice their opinions in the discussion; hence an open-ended question was added to the classroom response platform for teachers to explain their responses.

Phase 3: the formal study


Participants: In total, 294 teachers from over 10 different disciplines participated in the 10
workshops. Information about the workshops was advertised on the university website and sent
to the teachers via mass email for registration one month before the workshop. These workshops
are part of the obligatory professional development courses for teachers new to the university,
but also open to all teaching and research staff at the university. At the start of each workshop,
teachers were informed of the current study and asked to sign informed consent if they agreed
to participate. All registrants gave their consent to this research.
In general, the workshop participants included both early and middle-career stage teachers
who undertake different positions at the university (e.g. teaching assistant, lecturer, assistant
professor). The participant teachers are new to the university (less than three years of experience), but they are not necessarily novice teachers as they may have undertaken positions in
other universities for some years. Table 1 presents the demographics of the teachers.
Data collection: Data collection involved using the survey format from the classroom response system, followed by discussions among teachers. In each workshop, basic theories and conceptualizations of feedback were first introduced to the teacher participants. To familiarise the teachers with the six pedagogical practices, the facilitator defined each of them with concrete examples (e.g. using pictures and cases to show what is meant by ‘rubrics’). Before engaging teachers in the survey, the facilitator also double-checked with the teachers whether they

Table 1. Teacher demographics.


Position N Disciplines N
Associate Professor 9 Faculty of Architecture 14
Assistant Professor 116 Faculty of Arts 59
Research Assistant Professor 17 Faculty of Business and Economics 16
Assistant Lecturer 33 Faculty of Dentistry 2
Teaching Assistant 46 Faculty of Education 22
Tutor 6 Faculty of Engineering 21
Lecturer 47 Faculty of Law 7
Senior Lecturer 9 Faculty of Science 32
Research Assistant 6 Faculty of Social Sciences 42
Post-doctoral Fellow 3 Faculty of Medicine 71
Others 2 Others 8
Total 294 Total 294

understood the six practices; questions such as ‘Before we proceed, are there any questions about the six practices?’ were asked.
The facilitator then used Poll Everywhere (a web-based classroom response application
enabling the audience to respond directly on their mobile phones to questions shown on the
screen) to survey whether participants considered the six types of pedagogical practices as
feedback. The survey contained six questions (i.e. ‘do you consider stamps/grades/corrections/
rubrics/comments to the class/exemplars as feedback?’). Each question has three choices: Yes,
No and Depends. Following each of the above multiple-choice questions, there was also an
open-ended question for teachers to explain their responses (i.e. ‘Please enter your reasons for
the above survey choice – why do you (not) consider it as feedback?’).
With Poll Everywhere, the survey results were immediately shown on the screen after the
teachers finished the survey. Facilitated by the facilitator, the teachers then openly discussed
and justified their responses (i.e. whether they considered the six types of practices as feedback
and why). No correct or wrong answers were predestined, and the discussion attempted to be
open and respectful.
In sum, 248 workshop participants responded to the survey. The discussion in each workshop
lasted around half an hour, conducted in English. Two research assistants took notes and
recorded the discussions.

Phase 4: data analysis


For the survey data, as the study does not aim to provide inferential results, only descriptive analysis was performed. Discussion data from all workshops and the open responses in the surveys were transcribed and synthesized in Excel.
We conducted a thematic analysis on the qualitative data, and captured both the semantic
and latent themes (Braun and Clarke 2006). Semantic themes are based on what the participants
have explicitly said or written down, which are descriptive and only identify surface meaning.
Latent themes are the underlying ideas and beliefs that shape or inform the semantic content.
Braun and Clarke compared research data to an ‘uneven blob of jelly’ and explained that semantic
themes describe the surface and form of the jelly, whereas latent themes identify the factors
that contribute to that particular jelly form.
Following Braun and Clarke’s suggestions, we first identified the semantic themes under each
of the six feedback types, i.e. what the teachers explicitly wrote or said about these feedback
practices. Subsequently, latent themes were developed through an iterative reading and examination of the data and the semantic themes. The following questions assisted us in developing
the latent themes: why did teachers consider this (the semantic content) as an important factor
when distinguishing feedback? What is the feedback belief underlying their feedback decisions?
Are there any shared patterns we can identify? Table 2 offers an example of the semantic and
latent themes identified in the case of ‘stamps/digital badges’.
Table 2. Example of semantic and latent themes.

Why should stamps/digital badges (not) be considered as feedback?

Semantic themes [what teachers have explicitly said or written down about stamps/badges]:
• Motivating/encouraging students (✔)
Latent theme [feedback belief underpinning teachers’ perceptions of stamps/badges]:
• Feedback serves affective purpose
Explanatory notes: Semantic – teachers who supported stamps/badges as feedback emphasized that stamps/badges can motivate and encourage students. Latent – teachers who recognised the affective purpose of feedback tend to accept stamps/badges as feedback.

Semantic themes:
• Not specific enough for students to act on or make improvements (✘)
• Too juvenile to have any real impact on student learning (✘)
Latent theme:
• Feedback should serve improvement purpose
Explanatory notes: Semantic – teachers who voted against (or were unsure of) stamps/badges as feedback argued that stamps/badges cannot guide students to make concrete improvement. Latent – these teachers believed feedback should serve improvement purpose; this is essential and must be fulfilled for any pedagogical practice to be considered as feedback.

Table 3. Descriptive statistics of feedback survey.

Do you consider the following as feedback?

Item: Yes | No | Depends | Valid total* | Missing | Total
Stamps/digital badges: 81 (36.8%) | 71 (32.3%) | 68 (30.9%) | 220 (100.0%) | 28 | 248
Grades: 121 (55.0%) | 59 (26.8%) | 40 (18.2%) | 220 (100.0%) | 28 | 248
Simple corrections: 158 (74.9%) | 27 (12.8%) | 26 (12.3%) | 211 (100.0%) | 37 | 248
Rubrics: 136 (62.7%) | 50 (23.0%) | 31 (14.3%) | 217 (100.0%) | 31 | 248
Comments to the whole class: 187 (83.5%) | 12 (5.3%) | 25 (11.2%) | 224 (100.0%) | 24 | 248
Exemplars: 149 (73.0%) | 35 (17.2%) | 20 (9.8%) | 204 (100.0%) | 44 | 248

*Valid percentages are reported in the findings.
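The valid percentages in Table 3 are computed against each item's valid total (the respondents who answered that item), not the overall N = 248. A minimal sketch of this arithmetic, using the counts reported in the table (a cell may differ from the printed value by 0.1 due to rounding):

```python
# Reproduce the valid percentages in Table 3 from the raw response counts.
# "Valid" excludes missing responses, so each percentage is computed
# against the item's valid total rather than the overall N = 248.

counts = {  # item: (Yes, No, Depends)
    "Stamps/digital badges": (81, 71, 68),
    "Grades": (121, 59, 40),
    "Simple corrections": (158, 27, 26),
    "Rubrics": (136, 50, 31),
    "Comments to the whole class": (187, 12, 25),
    "Exemplars": (149, 35, 20),
}
TOTAL = 248  # all survey respondents

for item, row in counts.items():
    valid = sum(row)             # respondents who answered this item
    missing = TOTAL - valid      # respondents who skipped it
    yes, no, depends = (round(100 * x / valid, 1) for x in row)
    print(f"{item}: Yes {yes}%, No {no}%, Depends {depends}% "
          f"(valid {valid}, missing {missing})")
```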

Findings
Findings show teachers’ perceptions of feedback are closely associated with their understanding
of feedback purposes (i.e. what feedback should be able to achieve). In this section, we will
first present an overview of the survey data to illustrate the extent to which these practices
are regarded as legitimate forms of feedback. Subsequently, we will report teachers’ open
responses to each of the six practices with specific reference to feedback purposes.

Survey overview
Survey results show the majority of teachers (over 50%) considered grades, rubrics, simple
corrections, comments to the whole class and generic exemplars as feedback. As many as 83.5%
of the teachers (N = 187) agreed that comments to the whole class should be counted as feed-
back. Table 3 presents a summary of survey findings.

Stamps/digital badges
Responses to this category varied: 36.8% of the teachers (N = 81) believed stamps/digital badges are feedback, while 32.3% (N = 71) voted the opposite. Another 30.9% (N = 68) said it ‘depends’.
Teachers who distinguished stamps/badges as feedback emphasised their ‘affective’ purpose of positively influencing students (McDaniel and Fanfarelli 2015; Carey and Stefaniak 2018). Teachers believed that students ‘like that little game’, that stamps/badges are very useful tools ‘for cheering up’, and that they provide ‘information on achievement’. Teacher 01 commented:

I think it is. Students need encouragement. They lack confidence, a lot of them, even something like
‘well done’ is a start – we should not stop there. Students brighten up with something like that.

On the other hand, teachers who voted against stamps/badges as feedback argued that they
do not satisfy the ‘improvement’ purpose of feedback. Some teachers criticized stamps/badges
for being too ‘juvenile’ and they ‘infantilise’ college students. These teachers mentioned that
stamps/badges are not detailed or constructive enough, so they may ‘not be helpful for students’
future learning’. As Teacher 18 pointed out,

Sometimes it is quite difficult for you to transmit what you want to say with a stamp. It is not
specific enough.

For teachers voting ‘dependent’ in the survey, a key condition is whether stamps/badges are used as the only feedback for the assignment. They acknowledged the ‘affective’ purpose of stamps/badges as feedback, but also believed stamps/badges should be accompanied by other feedback to ‘let students know what aspects are done well’ (i.e. the ‘improvement’ purpose of feedback).

Grades
As one of the most common practices in education, grades were considered feedback by
55.0% of the participating teachers (N = 121), while 26.8% (N = 59) voted 'No' and 18.2% (N = 40)
voted 'It depends'.
The most frequent reason given in support of grades as feedback was that they serve the
'improvement' purpose of feedback by helping students understand their positioning (Lipnevich
and Smith 2014). Teachers argued that grades 'allow student to have an idea of their abilities' and 'represent
where you are (on the scale)’. Teacher 04 considered grades as one of the most ‘conventional’
types of feedback as they are ‘measurable and quantifiable, very easy (for students) to see the
top score and mean score’.
Some teachers explained that their affirmative response was due to students’ preference.
They noted that ‘most students think grades are feedback’ and ‘they like this’. Teacher 03 said
‘some students will think a grade is feedback especially if it is a good grade’. These teachers
believed students have a large say in deciding what constitutes feedback as they are the direct
recipients. Such a proposition is not exclusive to the participants in this study. In Carless’ (2006)
research on Hong Kong teachers and students’ perceptions of feedback, some teachers also
thought ‘students are interested in their marks and grades only’ (p. 224).
Teachers who disagreed that grades are feedback were concerned about the unclear boundary
between assessment and feedback. They believed grades are assessment instead of feedback,
because ‘feedback would be what they’ve (students) done wrong, what they’ve done well’,
pointing to the ‘identify strengths and weaknesses’ feedback purpose in Dawson et al.’s (2019)
research. Teachers indicated that ‘students don’t know how to improve’ and grades serve mainly
as the ‘outcome’. Another group of teachers commented that grades may also be counted as
feedback if ‘the grade is explained’ or if ‘students are given grading descriptors’.
10 C. K. Y. CHAN AND J. LUO

Simple corrections
The majority of teachers (74.9%, N = 158) perceived simple corrections as feedback, while 12.8%
(N = 27) voted 'No' and 12.3% (N = 26) 'It depends'.
Most teachers acknowledged simple corrections as feedback because corrections fulfil part of
the purpose of feedback to identify students' weaknesses (Dawson et al. 2019). They argued that
corrections help students 'identify their weaknesses' so they 'do not make the same mistake
again'. Corrections were also believed to be 'alarming students to refine their work', hence
pointing to the enhancement of learning. These arguments mainly pertain to the role of simple
corrections as 'corrective feedback' in second language education research (Ellis 2006), high-
lighting their function of correcting grammatical mistakes.
However, teachers who questioned whether such corrections are constructive enough to count
as feedback tended to prioritise the 'improvement' purpose of feedback over the others. Apart
from pointing out errors, feedback should incorporate both 'qualitative and quantitative feedback
at the same time so students know how they are performing and know what areas they can and
should improve on, and know where they are and what they did'. For example, Teacher 23
emphasised that feedback should provide useful information to direct students' future performance.
In addition, two teachers raised socio-emotional issues in feedback, touching upon the
‘affective’ purpose they expected in feedback practice:

No. I think the feedback like this is cold. It is not encouraging; it just pointed out little things that
students did not do well.

Rubrics
For rubrics, over half of the participants (62.7%, N = 136) considered them feedback, whereas
23.0% (N = 50) and 14.3% (N = 31) voted 'No' and 'It depends' respectively.
Teachers who saw rubrics as a type of feedback believed they ‘provide clear guidelines for
student learning’ which will ‘lead to improvement’. These viewpoints mainly speak to the ‘improve-
ment in self-evaluation’ purpose of feedback (Nordrum, Evans, and Gustafsson 2013). Other
teachers also mentioned there are ‘specifics’ in the rubrics so students are expected to know
‘where they are standing’. Rubrics are considered ‘standardised and objective’ feedback. In this
sense, rubrics also fulfil the ‘improvement in understanding standards’ purpose as proposed by
Dawson et al. (2019).
The effectiveness of rubrics as feedback stimulated heated discussions among the teachers.
Some teachers argued that it ultimately depends on whether students are able to make use of
rubrics for future improvement. However well a rubric is designed, the 'structures' in rubrics may
not always translate well to students. The issue of feedback literacy and student agency
was raised:

I think it depends on whether the students read and engage with the rubrics and whether the
teachers make sure the students understand the rubrics.

Other teachers contended that rubrics are often 'too general and abstract' and need 'further
explanation' to become constructive 'feedback'. For these teachers, rubrics only serve to
supplement teacher feedback, and cannot be considered feedback in themselves.

Comments to the whole class


83.5% of the teachers perceived comments to the whole class as feedback (N = 187), leaving
only 5.3% (N = 12) who disagreed and 11.2% (N = 25) who felt unsure. When judging whether
whole-class comments can be regarded as feedback, teachers considered how the 'improvement'
purpose of feedback could be fulfilled.

Comments to the whole class were considered feedback because they 'highlight common
things to note among students' and enable students to 'consider others' work'. Some teachers
commented that because 'sometimes the whole class have made similar errors or have difficulties
in similar areas', comments to the whole class are helpful, constructive and save class time.
Still, other teachers believed that it is important for feedback to be 'personalised' rather than
'general'. As noted by Poulos and Mahony (2008), there is no guarantee that comments to the
whole class will speak to individual needs, hence their usefulness is compromised. Some
teachers mentioned:

Unless it is specific for a particular student, students sometimes may ignore general comments in
class as they do not think teachers are addressing to them.

Some teachers therefore noted that it is necessary to make explicit to students that such
comments are indeed 'feedback'. These teachers believed that if the comments are not taken
up by students, they cannot fulfil any of the purposes expected of feedback practice.

For students, they often think that comments to the class are just part of the lecture – there should
be ways to explain to them that this is feedback, to stand out that this is important and this is
feedback.

Generic exemplars
In terms of generic exemplars, 73.0% (N = 149) of the teachers agreed that they can be considered
feedback, while 17.2% (N = 35) disagreed and 9.8% (N = 20) voted 'It depends'. It should be
noted that 44 teachers did not respond to this item within the designated time, which may
indicate ambivalent attitudes towards exemplars.
Teachers who understood generic exemplars as feedback emphasised their potential to help
students better understand their work. Exemplars 'give students a sense of how instructors'
expectations often vary a lot to students’ expectations’. Similar to rubrics, exemplars are con-
sidered useful in fulfilling the ‘improvement in self-evaluation’ and ‘improvement in understanding
standards’ purposes of feedback, providing opportunities for students to understand and reflect
on their own performance.
A central concern preventing exemplars from acting as ‘feedback’ is student plagiarism, also
noted by Carless and Chan (2017). As one teacher commented:

No. Because then a lot of students just copy that exactly in terms of answer, and they just copy that
into their work without thinking since they know this is right.

These teachers questioned the learning potential of exemplars, and did not see how they would
contribute substantially to student learning.
A lack of personalisation and detail was again mentioned by teachers, indicating that
exemplars cannot ‘pinpoint individual issues of students’. The teachers believed feedback should
go beyond simply offering ‘answers’ to students: ‘sometimes our assessments are giving students
the answers they need but in feedback students get better information’. The unfavourable
responses to some extent corroborate Dawson et al.'s (2019) observation that exemplars do not
fit tightly into teachers' common definitions of feedback, and hence are not recognised
as feedback.

Discussion
In this study, when teachers identified a pedagogical practice as feedback, their understanding
and prioritisation of feedback purposes played a central role. For example, teachers who
recognised the 'affective' purpose of feedback tended to accept stamps/badges as a legitimate
feedback form, whereas those who prioritised the 'improvement' purpose tended to reject
stamps/badges as feedback because they are not specific enough for students to act upon. Not
all teachers recognised that feedback can serve very different purposes and be instantiated in
different pedagogical practices.
These findings point to the importance of cultivating teacher feedback literacy, in particular
how teachers manage practicalities in feedback processes. There is no 'perfect' feedback that
addresses all needs – even if there were, it would often not be feasible (or indeed desirable)
considering teachers’ constraints in time, workload and class size (Henderson, Ryan, and Phillips
2019), and a range of complex factors (e.g. space, power relation) influencing student engage-
ment with feedback (Chong 2021; Gravett 2020). The issue of how teachers distinguish feedback
practices is essentially a process of teachers making compromises between ‘what might be
ideal, what seems defensible, and what they think students want’ (Carless and Winstone
2020, p.7).
Therefore, in terms of teacher feedback literacy, this study argues that it is important for teachers
to be aware of the multiple purposes of feedback and of how different pedagogical practices
can serve these purposes in different settings and at different times. For example, although
stamps/badges are not specific enough to 'let students know what aspects are done well', they
are particularly well suited to fulfilling the 'affective' purpose of feedback by 'cheering students
up' – this may be necessary for teachers to build a positive relationship with the students and
encourage them. In recent years, digital credentialing has emerged as
an alternative to traditional paper credentials to recognise student achievements in online
learning (Gibson et al. 2015), and to highlight specific skills and abilities that students have
acquired (LaMagna 2017). In Tomić et al.’s (2019) case study where Open Badges were used as
part of the assessment for students’ programming and soft skills, it was found that students’
perceptions of these badges were generally positive. In the future, digital badges may become
useful ‘affective feedback’ for university students.
With regard to grades, while some teachers argued that grades belong to assessment rather
than feedback, others recognised their potential to 'let students know where they stand', hence
contributing to the 'improvement in understanding' purpose (Lipnevich and Smith 2014).
Correction is central to the conventional definition of feedback, serving well to 'identify
weaknesses' (Ellis 2006). Whole-class comments, though not personally directed, are effective in
'highlighting common things to note' and 'saving class time' (Race 2005); depending on their
content, they can serve different purposes such as the 'affective' and the 'improvement'. Rubrics
and exemplars correspond with
the sustainable model of feedback, which emphasises student agency in feedback processes
(Price, Handley, and Millar 2011). As noted by some teachers, they can effectively serve the
'improvement in self-evaluation' and 'improvement in understanding standards' purposes of
feedback. The use of rubrics and exemplars is congruent with a recent focus in the assessment
and feedback literature, which highlights students' 'evaluative judgement' in making decisions
about their work quality and performance (Tai et al. 2018).
Many researchers have acknowledged the different functions of feedback ‘depending on the
learning environment, the needs of the learner, the purpose of the task, and the particular
feedback paradigm adopted’ (Knight and Yorke 2003; Poulos and Mahony 2008, as cited in Evans
2013, p. 71). As noted by Yorke (2003), assessment in higher education calls for multidimensional
performances; in response, feedback must match that complexity. Understanding
the multiple purposes of feedback is important because of their fundamental role in enhancing
feedback effectiveness – it would be impossible to judge its effectiveness unless teachers make
clear what purpose feedback is striving to achieve (Hattie and Timperley 2007; Evans 2013).
‘Within the feedback process, clarity of purpose must be shared by all parties to enable eval-
uation to be useful’ (Price et al. 2010, p. 278).

In sum, the central message gleaned from this study is that it is important for teachers to
recognise the features of a range of possible feedback practices, and then to direct the appropriate
type of feedback to a given task at a given time based on its intended purpose; this should be
recognised as an essential component of teacher feedback literacy. As pointed out
by Evans (2013, p. 105), what is fundamental to effective feedback is 'whether the nature of
feedback is fit for purpose relative to the task and lecturers and students are convergent in
their expectations of feedback'. Teachers should know how to intentionally devise and structure
their feedback plans for student learning. There is no single 'perfect' way to deliver feedback –
a well-structured feedback design that makes good use of different feedback practices for
different task purposes can be highly effective. Feedback purposes should be considered as part
of the feedback design, and open communication about feedback purposes between teachers
(particularly those teaching the same course) and students is needed to align feedback
expectations.

Implications
This study adds to the scarce body of literature exploring how university teachers distinguish
pedagogical practices as feedback. Apart from extending the understanding of feedback from
a particular teacher-oriented lens, the findings also provide important implications for developing
teacher feedback literacy.
First, the research foregrounds and elaborates on an important component of teacher feedback
literacy which has not been made specifically clear in the feedback literacy literature: feedback-
literate teachers recognise the multiple purposes of feedback and how different pedagogical
practices can serve to satisfy these purposes.
Second, the research provides practical implications for strengthening teacher feedback literacy
and improving feedback practices. Rather than asking which single type of feedback is the most
effective, teachers are advised to prepare their own 'feedback toolkit' incorporating various
feedback practices suitable for different feedback purposes. It would be beneficial for teacher
education to help teachers comprehend the features of different feedback types and the multiple
purposes of feedback (e.g. the limitations and potential of stamps/badges, and how they fulfil
the affective purpose). In this way, teachers can structure their own feedback plans by directing
the most suitable feedback practice to a particular purpose or task at a particular time. In the
long run, combining a range of different feedback practices is expected to compensate for the
weaknesses of individual practices and enhance students' overall feedback experience.

Limitations and future study


A major limitation of this study lies in the discussion data. While the workshops provided a good
platform for teachers to discuss their ideas, their size (around 20–30 participants per workshop)
made it difficult to guarantee that every teacher had ample opportunity to voice their opinions.
Future studies can use focus-group interviews (5–7 participants per group) to allow more time
and room for each teacher to engage in the discussion. That said, the workshop format did offer
a natural learning and reflection opportunity for the teachers, which built directly into their
professional development training.
Another limitation is the survey, which elicited very simple nominal data. Although it achieved
its goal, which was to facilitate the discussion, in the future a more sophisticated
instrument can be designed to investigate teacher perceptions of different feedback practices.
It would also be interesting to adopt a longitudinal design and research how teachers actually
make use of different feedback practices. Future research may also include students’ perceptions
of different feedback types and investigate whether there is a perception mismatch between
teachers and students.

Conclusion
The field of feedback research has produced rich contributions. However, within this large body
of literature, a fundamental question, namely how to distinguish a pedagogical practice as
feedback, has received scant attention. In particular, few studies have attempted to answer this
question from university teachers' perspectives.
The current study addresses this research gap by investigating university teachers' feedback
perceptions, specifically whether six pedagogical practices should be considered feedback. The
findings show that teachers' understandings of feedback purposes played an influential role.

ORCID
Cecilia Ka Yuk Chan http://orcid.org/0000-0001-6984-6360
Jiahui Luo http://orcid.org/0000-0003-1797-2191

References
Bell, A., R. Mladenovic, and M. Price. 2013. “Students’ Perceptions of the Usefulness of Marking Guides,
Grade Descriptors and Annotated Exemplars.” Assessment & Evaluation in Higher Education 38 (7):
769–788. doi:10.1080/02602938.2012.714738.
Boud, D., and N. Falchikov. 2006. “Aligning Assessment with Long‐Term Learning.” Assessment &
Evaluation in Higher Education 31 (4): 399–413. doi:10.1080/02602930600679050.
Boud, D., and E. Molloy. 2013. “Rethinking Models of Feedback for Learning: The Challenge of Design.”
Assessment & Evaluation in Higher Education 38 (6): 698–712. doi:10.1080/02602938.2012.691462.
Braun, V., and V. Clarke. 2006. "Using Thematic Analysis in Psychology." Qualitative Research in Psychology
3 (2): 77–101. doi:10.1191/1478088706qp063oa.
Brown, G. T. L., L. R. Harris, and J. Harnett. 2012. "Teacher Beliefs about Feedback within an Assessment
for Learning Environment: Endorsement of Improved Learning over Student Well-Being.” Teaching
and Teacher Education 28 (7): 968–978. doi:10.1016/j.tate.2012.05.003.
Carey, K. L., and J. E. Stefaniak. 2018. “An Exploration of the Utility of Digital Badging in Higher
Education Settings.” Educational Technology Research and Development 66 (5): 1211–1229. doi:10.1007/
s11423-018-9602-1.
Carless, D. 2006. “Differing Perceptions in the Feedback Process.” Studies in Higher Education 31 (2):
219–233. doi:10.1080/03075070600572132.
Carless, D., and D. Boud. 2018. “The Development of Student Feedback Literacy: Enabling Uptake of
Feedback.” Assessment & Evaluation in Higher Education 43 (8): 1315–1325. doi:10.1080/02602938.2
018.1463354.
Carless, D., and K. K. H. Chan. 2017. “Managing Dialogic Use of Exemplars.” Assessment & Evaluation
in Higher Education 42 (6): 930–941. doi:10.1080/02602938.2016.1211246.
Carless, D., D. Salter, M. Yang, and J. Lam. 2011. “Developing Sustainable Feedback Practices.” Studies
in Higher Education 36 (4): 395–407. doi:10.1080/03075071003642449.
Carless, D., and N. Winstone. 2020. "Teacher Feedback Literacy and Its Interplay with Student
Feedback Literacy." Teaching in Higher Education.
Chan, C. K. Y., and J. Luo. 2020. "A Four-Dimensional Conceptual Framework for Student Assessment
Literacy in Holistic Competency Development." Assessment & Evaluation in Higher Education.
Cheng, M. W. T., and C. K. Y. Chan. 2019. “An Experimental Test: Using Rubrics for Reflective Writing
to Develop Reflection.” Studies in Educational Evaluation 61: 176–182. doi:10.1016/j.stueduc.2019.04.001.
Chong, S. W. 2019. “The Use of Exemplars in English Writing Classrooms: From Theory to Practice.”
Assessment & Evaluation in Higher Education 44 (5): 748–763. doi:10.1080/02602938.2018.1535051.
Chong, S. W. 2021. “Reconsidering Student Feedback Literacy from an Ecological Perspective.” Assessment
& Evaluation in Higher Education 46 (1): 92–104. doi:10.1080/02602938.2020.1730765.
Dawson, P., M. Henderson, P. Mahoney, M. Phillips, T. Ryan, D. Boud, and E. Molloy. 2019. “What Makes
for Effective Feedback: Staff and Student Perspectives.” Assessment & Evaluation in Higher Education
44 (1): 25–36. doi:10.1080/02602938.2018.1467877.
Ellis, R. 2006. “Researching the Effects of Form-Focussed Instruction on L2 Acquisition.” AILA Review
19: 18–41. doi:10.1075/aila.19.04ell.
Evans, C. 2013. “Making Sense of Assessment Feedback in Higher Education.” Review of Educational
Research 83 (1): 70–120. doi:10.3102/0034654312474350.
Ferguson, P. 2011. “Student Perceptions of Quality Feedback in Teacher Education.” Assessment &
Evaluation in Higher Education 36 (1): 51–62. doi:10.1080/02602930903197883.
Gezie, Abebaw, Khadija Khaja, Valerie Nash Chang, Margaret E. Adamek, and Mary Beth Johnsen.
2012. “Rubrics as a Tool for Learning and Assessment: What Do Baccalaureate Students Think.”
Journal of Teaching in Social Work 32 (4): 421–437. doi:10.1080/08841233.2012.705240.
Gibbs, G., and C. Simpson. 2004. “Conditions under Which Assessment Supports Students’ Learning.”
Learning and Teaching in Higher Education 1: 3–31.
Gibson, D., N. Ostashewski, K. Flintoff, S. Grant, and E. Knight. 2015. “Digital Badges in Education.”
Education and Information Technologies 20 (2): 403–410. doi:10.1007/s10639-013-9291-7.
Gravett, K. 2020. “Feedback Literacies as Sociomaterial Practice.” Critical Studies in Education.
Hattie, J., and H. Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77 (1):
81–112. doi:10.3102/003465430298487.
Henderson, M., T. Ryan, and M. Phillips. 2019. “The Challenges of Feedback in Higher Education.”
Assessment & Evaluation in Higher Education 44 (8): 1237–1252. doi:10.1080/02602938.2019.1599815.
Irving, S. E., L. R. Harris, and E. R. Peterson. 2011. “One Assessment Doesn’t Serve All the Purposes’
or Does It? New Zealand Teachers Describe Assessment and Feedback.” Asia Pacific Education Review
12 (3): 413–426. doi:10.1007/s12564-011-9145-1.
Ivanič, R., R. Clark, and R. Rimmershaw. 2000. "What Am I Supposed to Make of This? The Messages
Conveyed to Students by Tutors' Written Comments." In Student Writing in Higher Education: New
Contexts, edited by M. Lea and B. Stierer, 47–65. Buckingham: Open University Press.
Knight, P., and M. Yorke. 2003. Assessment, Learning and Employability. Maidenhead, UK: SRHE/Open
University Press.
LaMagna, M. 2017. “Placing Digital Badges and Micro-Credentials in Context.” Journal of Electronic
Resources Librarianship 29 (4): 206–210. doi:10.1080/1941126X.2017.1378538.
Li, J., and R. De Luca. 2014. “Review of Assessment Feedback.” Studies in Higher Education 39 (2):
378–393. doi:10.1080/03075079.2012.709494.
Lipnevich, A., and J. Smith. 2014. Response to Assessment Feedback: The Effect of Grades, Praise, and
Source of Information (ETS RR-08-30). Princeton, NJ: ETS.
Lizzio, A., and K. Wilson. 2008. “Feedback on Assessment: Students’ Perceptions of Quality and
Effectiveness.” Assessment & Evaluation in Higher Education 33 (3): 263–275. doi:10.1080/
02602930701292548.
Mackey, A., M. Al-Khalil, G. Atanassova, M. Hama, A. Logan-Terry, and K. Nakatsukasa. 2007. “Teachers’
Intentions and Learners’ Perceptions about Corrective Feedback in the L2 Classroom.” Innovation
in Language Learning and Teaching 1 (1): 129–152. doi:10.2167/illt047.0.
McDaniel, R., and J. R. Fanfarelli. 2015. “A Digital Badging Dataset Focused on Performance, Engagement
and Behavior-Related Variables from Observations in Web-Based University Courses.” British Journal
of Educational Technology 46 (5): 937–941. doi:10.1111/bjet.12272.
Nelson, M. M., and C. D. Schunn. 2009. “The Nature of Feedback: How Different Types of Peer Feedback
Affect Writing Performance.” Instructional Science 37 (4): 375–401. doi:10.1007/s11251-008-9053-x.
Nicol, D. J., and D. Macfarlane‐Dick. 2006. “Formative Assessment and Self‐Regulated Learning: A
Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218.
doi:10.1080/03075070600572090.
Nordrum, L., K. Evans, and M. Gustafsson. 2013. “Comparing Student Learning Experiences of in-Text
Commentary and Rubric-Articulated Feedback: Strategies for Formative Assessment.” Assessment &
Evaluation in Higher Education 38 (8): 919–940. doi:10.1080/02602938.2012.758229.
O’Donovan, B., M. Price, and C. Rust. 2001. “The Student Experience of Criterion-Referenced Assessment
(through the Introduction of a Common Criteria Assessment Grid).” Innovations in Education and
Teaching International 38 (1): 74–85. doi:10.1080/147032901300002873.
Pajares, M. F. 1992. “Teachers’ Beliefs and Educational Research: Cleaning up a Messy Construct.”
Review of Educational Research 62 (3): 307–332. doi:10.3102/00346543062003307.
Perera, J., N. Lee, K. Win, J. Perera, and L. Wijesuriya. 2008. “Formative Feedback to Students: The
Mismatch between Faculty Perceptions and Student Expectations.” Medical Teacher 30 (4): 395–399.
doi:10.1080/01421590801949966.
Poulos, A., and M. J. Mahony. 2008. “Effectiveness of Feedback: The Students’ Perspective.” Assessment &
Evaluation in Higher Education 33 (2): 143–154.
Price, M., K. Handley, and J. Millar. 2011. “Feedback: Focusing Attention on Engagement.” Studies in
Higher Education 36 (8): 879–896. doi:10.1080/03075079.2010.483513.
Price, M., K. Handley, J. Millar, and B. O'Donovan. 2010. "Feedback: All That Effort, but What Is the Effect?"
Assessment & Evaluation in Higher Education 35 (3): 277–289.
Race, P. 2005. “Using Feedback to Help Students to Learn.” The Higher Education Academy, (9 October
2007).
Reddy, Y. M., and H. Andrade. 2010. “A Review of Rubric Use in Higher Education.” Assessment &
Evaluation in Higher Education 35 (4): 435–448. doi:10.1080/02602930902862859.
Sadler, D. R. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment
& Evaluation in Higher Education 35 (5): 535–550.
Scoles, J., M. Huxham, and J. McArthur. 2013. “No Longer Exempt from Good Practice: Using Exemplars
to Close the Feedback Gap for Exams.” Assessment & Evaluation in Higher Education 38 (6): 631–645.
doi:10.1080/02602938.2012.674485.
Tai, J., R. Ajjawi, D. Boud, P. Dawson, and E. Panadero. 2018. “Developing Evaluative Judgement:
Enabling Students to Make Decisions about the Quality of Work.” Higher Education 76 (3): 467–481.
doi:10.1007/s10734-017-0220-3.
Tomić, Bojan, Jelena Jovanović, Nikola Milikić, Vladan Devedžić, Sonja Dimitrijević, Dragan Đurić, and
Zoran Ševarac. 2019. “Grading Students’ Programming and Soft Skills with Open Badges: A Case
Study.” British Journal of Educational Technology 50 (2): 518–530. doi:10.1111/bjet.12564.
Torrance, H. 2007. “Assessment as Learning? How the Use of Explicit Learning Objectives, Assessment
Criteria and Feedback in Post‐Secondary Education and Training Can Come to Dominate Learning.”
Assessment in Education 14 (3): 281–294. doi:10.1080/09695940701591867.
Wang, B., S. Yu, and T. Teo. 2018. “Experienced EFL Teachers’ Beliefs about Feedback on Student Oral
Presentations.” Asian-Pacific Journal of Second and Foreign Language Education 3 (1): 1–13. doi:10.1186/
s40862-018-0053-3.
Willis, J. E., III, K. Flintoff, and B. McGraw. 2016. "A Philosophy of Open Digital Badges." In Foundation
of Digital Badges and Micro-Credentials: Demonstrating and Recognizing Knowledge and Competencies,
edited by D. Ifenthaler, N. Bellin-Mularski, and D.-K. Mah, 23–40. Cham: Springer International
Publishing.
Yorke, M. 2003. “Formative Assessment in Higher Education: Moves towards Theory and the
Enhancement of Pedagogic Practice.” Higher Education 45 (4): 477–501. doi:10.1023/A:1023967026413.
