
Assessing Writing 49 (2021) 100530

Contents lists available at ScienceDirect

Assessing Writing
journal homepage: www.elsevier.com/locate/asw

Insights into the cognitive processes of trained vs untrained EFL peer reviewers on writing: An exploratory study

Alireza Memari Hanjani

Department of English Language, Islamshahr Branch, Islamic Azad University, Islamshahr, Iran

ARTICLE INFO

Keywords:
EFL writing
Peer evaluation
Peer review training
Think-aloud
Cognitive processes

ABSTRACT

While research on various aspects of peer review in ESL/EFL writing has been burgeoning in the past two decades, studies comparing the cognitive processes of trained and untrained L2 peer reviewers have been scant. This case study endeavored to address this gap by recruiting ten senior EFL university students and randomly assigning them to trained and untrained groups. While both groups attended some common preparation sessions, including learning to think aloud, only one of the groups participated in individual review training conferences, where they learned how to evaluate cause and effect essays following the instructions and employing peer review sheets. The two groups then proceeded to review two sample student essays while thinking aloud. Data analysis therefore included examining the reviewers' recorded think-alouds and the essays they evaluated. In general, the results indicated that instruction could improve the type of the trained reviewers' comments even though, on the surface, it seemed ineffective in terms of focus and quality. The article ends with some tentative pedagogical implications which may contribute to the successful incorporation of peer evaluation in EFL writing contexts.

1. Introduction

L2 literature highlights the value of peer review in ESL/EFL writing at all educational levels. It is reported that this technique can
improve the quality of essays written by learners (Memari Hanjani & Li, 2014b; Berg, 1999; Huang, 2004; Min, 2005, 2006; Peng,
2007; Yang, Badger, & Zhen, 2006), contribute to learner autonomy (Tsui & Ng, 2000; Yang et al., 2006), enhance learners’ audience
awareness (Jacobs, Curtis, Braine, & Huang, 1998; Liu & Sadler, 2003; Mendonca & Johnson, 1994; Paulus, 1999; Tsui & Ng, 2000),
encourage critical reading and evaluative skills (Memari Hanjani & Li, 2014b; Berg, 1999; Rollinson, 2005; Ting & Qin, 2010), develop
positive attitudes toward writing (Min, 2005), boost a sense of ownership of the text (Tsui & Ng, 2000), improve confidence and
language skills (Byrd, 2003; Min, 2006), establish a supportive and stress-free atmosphere in class (Memari Hanjani & Li, 2014b; Ting
& Qin, 2010), and finally facilitate interaction and negotiation of meaning, collaborative learning and co-construction of knowledge
(Lundstrom & Baker, 2009; Villamil & De Guerrero, 1996).
Hence, the reported advantages of peer review in ESL/EFL writing contexts have prompted many practitioners and teachers to
supplement their traditional teacher feedback approach with this technique in L2 writing classes (Memari Hanjani & Li, 2014a;
Memari Hanjani, 2016). However, it is argued that such benefits are not guaranteed unless L2 students possess the necessary skills for reviewing their peers' writing. More precisely, for peer review to be effective, instructional intervention and training are required to prepare L2 learners for this task (Memari Hanjani & Li, 2014a; Berg, 1999; Hu, 2005; Lam, 2010; Liou & Peng, 2009; Min, 2005, 2006, 2008, 2016; Yang & Meng, 2013; Zhu, 1995).

E-mail address: Memari@gmail.com.

https://doi.org/10.1016/j.asw.2021.100530
Received 15 July 2020; Received in revised form 2 April 2021; Accepted 6 April 2021
Available online 10 April 2021
1075-2935/© 2021 Elsevier Inc. All rights reserved.
To date, some researchers have investigated the role of peer review training on the stances of L2 peer reviewers and the quality,
focus, and accuracy of their comments both in traditional face-to-face and on-line situations (Memari Hanjani, 2013; Berg, 1999; Lam,
2010; Liou & Peng, 2009; Min, 2005; Yang & Meng, 2013; Zheng, 2012; Zhu, 1995) as well as feedback incorporation behaviors of L2
writers, the inter- and intra-factors involving peer evaluation, peer reviewers’ individual differences, difficulties and challenges
(Memari Hanjani, 2013; Berg, 1999; Hu, 2005; Lam, 2010; Liou & Peng, 2009; Min, 2006; Panadero, 2016; Yu, 2020; Yu & Hu, 2017;
Zhao, 2010; Zhu & Carless, 2018).
Yet, despite the abundant research conducted on different aspects of peer review training in L2 writing contexts, our understanding
of the mental processes of the trained peer reviewers compared with their untrained counterparts is limited. This knowledge is
essential as it enables ESL/EFL writing practitioners to select, adapt, and design appropriate peer review training activities to facilitate
L2 students’ peer review skill development. It is also important to grasp whether trained peer reviewers approach this task differently
from their untrained peers. To serve that end, the present case study set out to fill this void by examining trained and untrained EFL reviewers' cognitive processes regarding the nature, focus, and quality of their comments. It should be noted that in this study
cognitive processes refer to the sequence of verbalized thoughts which reflect the decisions and choices made by L2 student reviewers
in terms of feedback focus, type, and accuracy when assessing their peers’ papers.

2. Literature review

The popularity of peer review in L2 writing contexts over the past two decades has prompted many researchers to investigate
different aspects of this technique (Memari Hanjani, 2013, 2016; Byrd, 2003; Diab, 2010, 2011; Hu, 2005; Lam, 2010; Kamimura,
2006; Lundstrom & Baker, 2009; Min, 2005, 2006; Morra & Romano, 2009; Ting & Qin, 2010; Tsui & Ng, 2000; Villamil & De
Guerrero, 1996, 1998; Wang, 2014; Yang et al., 2006; Zhu, 2001; Zhu & Mitchell, 2012). One strand of research has concentrated on
peer review training and its impact on the L2 reviewers' stances, as well as the focus (local and global), type (revision and non-revision oriented), and quality (accuracy and validity) of their feedback (Memari Hanjani, 2013; Berg, 1999; Lam, 2010; Liou & Peng,
2009; Min, 2005; Yang & Meng, 2013; Zheng, 2012; Zhu, 1995). Another strand of research has investigated its effect on feedback
incorporation behaviors of L2 writers, the inter- and intra-factors involving peer evaluation, peer reviewers’ individual differences,
difficulties and challenges (Memari Hanjani, 2013; Memari Hanjani & Li, 2014a; Berg, 1999; Hu, 2005; Lam, 2010; Liou & Peng, 2009;
Min, 2006; Panadero, 2016; Yu, 2020; Yu & Hu, 2017; Zhao, 2010; Zhu & Carless, 2018). The findings of these studies have confirmed
that training can improve not only the feedback quantity and quality of L2 reviewers, but also the feedback use by L2 writers and
consequently the quality of their revised drafts under certain conditions (Hu, 2005; Liu & Hansen, 2002; Min, 2005, 2006, 2008).
For instance, Hu’s (2005) longitudinal action research revealed that to be pedagogically productive, peer review should be
combined with extensive and appropriate training. He also added that training developed students’ favorable attitudes towards peer
review, improved the quality and quantity of peer comments as they focused on both form and content of the papers they evaluated,
increased peer feedback incorporation rate, and enhanced the quality of learners’ revised drafts and writing skill.
Min (2005, 2006, 2008) reported the findings of a classroom study investigating the effect of training on L2 reviewers’ stances,
feedback focus and quality as well as L2 review receivers’ feedback incorporation behaviors and the quality of their revised drafts. Her
findings indicated that after training, student reviewers used relatively more collaborative stances and were able to produce a greater amount of feedback, most of which was relevant and specific, concentrating on global issues. The revised drafts developed after peer
evaluation, on the other hand, revealed that not only did student writers incorporate a significantly higher number of reviewers' comments into their papers, but peer feedback training also had a significantly greater impact on students' revisions, and the number of revisions with enhanced quality was significantly higher than before peer review training. Though her findings are illuminating, the internal validity of the investigations is problematic, as she conducted the studies with a trained group only and no control group was recruited. Hence, it is not clear whether the observed changes in the students' comments and their revisions can be attributed to the training they received, or whether other factors, such as practice with writing, might have affected the results and weakened the conclusions reached in her studies.
She further examined the effect of different peer review training methods on the quality of comments delivered on higher-order
issues by L2 evaluators (Min, 2016) and stressed that even though the group that watched the mastery model and received correction plus explication as feedback made significantly better progress than the other groups, in general peer review training improved the peer review skills of all learners. Again, the absence of an untrained group in her more recent research can imply that the findings of
her earlier research (Min, 2005, 2006) had already convinced her that peer review training had positive effects on the quality of peer
reviewers’ comments.
Continuing this line of research, Lam (2010) and Rahimi (2013) arranged peer review training sessions for non-English majors and EFL learners in two different contexts. Drawing on the data elicited from the participants, they concluded that training equipped learners with the revision skills needed for conducting successful peer review activities, shifted the participants' focus from local to global issues, improved the peer feedback incorporation rate in their subsequent drafts, and consequently contributed to developing higher quality papers. However, Lam's report was based only on four learners' reflections, and no cross checking was made between the
participants' attitudes and their first and revised drafts. Hence, in order to be more reliable, such findings need verification/triangulation by other data sources, including evaluation of the quality of learners' papers before and after peer review training.
Likewise, Rahimi failed to clarify whether all peer comments were revision oriented or not. Obviously, the nature of comments
(requiring revision or not) is a significant issue that could influence some of the findings of his study.
In an effort to understand the influence of L2 reviewers' individual differences and contextual factors on their feedback
performance, Yu and Hu (2017) conducted a case study whose findings disclosed some similarities and differences in the reviewers' practices even though the training they received at the beginning of the study was the same. They attributed these feedback variations to a number of factors, such as "their philosophy about feedback and writing, their motives and goals, and their attitudes towards face-saving and group harmony" (p. 33). Drawing on the findings, they stressed that short training sessions were inadequate as
students’ peer feedback practices were deeply rooted in their beliefs and motives and proposed that one possible solution to address L2
reviewers’ different approaches would be constant peer feedback training sessions.
Likewise, reporting the difficulties and challenges L2 peer reviewers faced in providing genre-based feedback, Yu (2020) argued that inadequate training and limited resources imposed a greater cognitive load on L2 learners and negatively affected the quality of their comments. Hence, their feedback mainly concentrated on the linguistic features, content, and organization of the theses they evaluated rather than their genre-related aspects. He subsequently stressed that the reviewers' challenges were caused by their unfamiliarity with the genre requirements, lack of confidence in their linguistic abilities, and their concern not to hurt the review receivers.
Further, Zheng (2012) conducted an ethnographic study to understand the dynamics and process of peer review activity from a sociocultural perspective. By analyzing multiple data sets, he identified five interaction patterns among learners: collaborative, expert–novice, dominant–dominant, dominant–passive, and passive–passive. The findings suggested that while the first two stances could facilitate learning, the others were not as constructive, and he concluded that, in order to overcome peer review challenges, "teacher's tutoring was necessary, acting as facilitator, counselor, mediator or even co-learner" (p. 124).
Stressing the constant role of teacher instruction in the success of the peer review process, Zhu and Carless (2018) inquired into the perceived benefits of dialoguing in peer feedback for both feedback providers and receivers. They noted that dialogue between the learners could facilitate the experience as it activated their cognitive processes and gave reviewers the opportunity to justify their comments and receivers the opportunity to respond to the feedback. Besides, as the authors argued, verbal interaction could improve
negotiation of meanings among the participants and consequently feedback incorporation potential.
Zhao (2010) also underlined the key role of understanding in feedback incorporation. Comparing two sources of feedback (teacher vs peer) in terms of their use, he reported that even though L2 learners understood the value of their teacher's feedback less than that of their classmates', they used their teacher's feedback more than their peers' in their revised drafts. He attributed the learners' behavior to their attitudes towards the importance and credibility of the teacher's comments. However, it can be argued that judging the feedback incorporation rates of the teacher and peers by simply comparing their incorporation frequency in learners' subsequent drafts may be problematic, as many other factors may be involved. Besides, ignoring the essential role of peer evaluation training (as he himself admitted) may have had a negative effect on peer feedback use by student writers.
Extending this line of research to electronic contexts, Chang (2015) and Liou and Peng (2009) investigated peer reviewers' comments in weblogs before and after training, and following prolonged one-to-one teacher modeling feedback in asynchronous web-based writing, respectively. The findings of both studies indicated that instruction enhanced the participants' peer review skills as they produced more revision-oriented comments, focused more significantly on global issues, took a more collaborative stance, and produced a higher percentage of personal and non-evaluative reader comments. However, as Chang's study lacked a control group and the researcher simply compared one narrative and one process essay, it could be argued that any difference might not necessarily be the effect of peer review training and instructor modeling.
Finally, in his review article, Panadero (2016) argued that promoting trust and perceived fairness between student reviewers and review receivers, and consequently increasing the incorporation rate of peer feedback, depends on the competence of the peers performing the evaluations. Hence, learners need to be trained and engaged in intensive practice, which "comes with the 'cost' of taking more classroom time as students need to be taught and scaffolded as to how to perform PA [peer assessment]" (p. 262).
Overall, the literature on peer review training confirms that instruction and use of complementary tools such as peer review sheets
can improve EFL students’ evaluation skills both qualitatively and quantitatively on the one hand, and the feedback incorporation rate
and the quality of revised drafts developed by L2 learners post peer feedback on the other (Lam, 2010; Liou & Peng, 2009; Liu & Sadler,
2003; Lundstrom & Baker, 2009; Min, 2005, 2006; Tseng, 2007). Even though these findings are informative, follow-up studies
comparing the mental processes of trained and untrained peer reviewers have been overlooked in L2 writing peer feedback research
and deserve our attention. In other words, the studies reviewed above focused on the product of peer review and not on its process. To
the best of my knowledge, investigations showing awareness of this issue have been scarce, as researchers have employed observations, texts, and interviews to collect data such as learners' interactional exchanges, text revisions, and reflections, both qualitatively and quantitatively. Although such data are very useful, they have their own limitations and fail to explore the mental aspect of trained and untrained reviewers' essay evaluation activity. Indeed, the findings of such research are the surface expression of the effects of training on learners' performance and cannot fully reveal the underlying effects on their mental constructs, such as their decision-making behaviors. Little knowledge is available on how different the mental processes of trained L2 reviewers are compared to their untrained counterparts when evaluating their peers' papers in terms of type, focus, and quality. Hence, there is a need for an in-depth analysis of trained and untrained learners' decision-making behaviors, processes, and approaches while evaluating papers. One method that can address this gap is the think-aloud protocol, as it enables researchers to examine what learners actually think when assessing their peers' essays and may provide insight into the possible differences in how trained and untrained peer reviewers approach assessing their classmates' essays. Such understanding is illuminating and allows ESL/EFL writing practitioners to choose, adapt, and design appropriate peer review training activities which address L2 reviewers' challenges, boost their evaluation performance, increase feedback incorporation rates, and improve the quality of revised papers.
The paucity of empirical research on trained and untrained peer reviewers' mental processes and decision-making behaviors was, therefore, a driving force to examine and compare the cognitive processes of trained and untrained L2 peer reviewers concerning the type, focus, and accuracy of their comments by employing recorded think-alouds and textual analyses. More
specifically, the present study aimed to understand to what extent one-to-one researcher–reviewer training sessions could influence the
characteristics of the L2 reviewers' cognitive processes by examining their feedback type (revision and non-revision oriented), focus (local and global), and quality (accuracy and inaccuracy). The following question guided the study:

• Is there any observable difference between trained and untrained L2 peer reviewers’ cognitive processes in terms of type, focus, and
quality of feedback?

3. Methodology

3.1. Context and participants

The participants of the study were ten senior EFL students majoring in an English translation bachelor's program at a private university in Iran during the autumn semester of 2019. All of them had already passed an academic essay writing course as a mandatory module, but they had not received any instruction on peer review, nor had they participated in any form of peer review activity before the study. The participants were selected from a pool of 20 volunteers. Before the study started, all the volunteer students were asked to develop a paper on the prompt 'What effects do exercise have on our body?' to evaluate their writing proficiency. The researcher used a multiple-trait scoring rubric to assess the students' sample papers, and based on the results, four males and six females aged 23 to 25 years (mean age 24) were selected. All the participants were native speakers of Persian and had studied English for more than eight years at the time of the study. According to the records kept in the English department, their English proficiency, measured by the paper-and-pencil Test of English as a Foreign Language (TOEFL), was approximately 480, estimated to be at the upper-intermediate level. The detailed demographic information of the participants can be seen in Table 1.

3.2. Data collection

At this stage the participants were randomly assigned to "trained" and "untrained" groups and attended common and exclusive preparation sessions as follows (Fig. 1 summarizes the major activities performed during each session):
Step 1 (essay reviewing). Although all the participants had passed the Academic Essay Writing course before, two review sessions were arranged for both groups to ensure that they knew English academic essay conventions, especially those of cause and effect essays, before the peer review activity. During the first session, the students were introduced to the components and structural elements of academic essays and their functions. In the second session, cause and effect essays were discussed; their components, focus, functions, organization, and techniques for developing them were illustrated, and the participants were given opportunities to identify these features in a couple of model essays.
Step 2 (training). The instruction arranged for each group differed, as follows:

1 The trained group: each member of the trained group attended three 30-min one-on-one researcher–reviewer conference sessions. During the first session, the researcher provided the reviewers with a copy of a sample cause essay which had been developed by an anonymous student, along with a peer review sheet. Then, thinking out loud, he demonstrated how to make comments on the paper and addressed its local and global errors/problems following the guidelines provided by the peer review sheet (Appendix 1). In the following session, the same procedure was followed, this time evaluating a sample effect essay. To make sure that all the group members had learned the peer review and think-aloud techniques, in the last session each student was provided with one cause and one effect draft and was asked to practice thinking aloud when evaluating the papers, following the same procedures they had already learned. The model essays, belonging to a previous cohort of students, had been selected quite purposefully and fully lent themselves to the purpose of this study; in effect, they included both local and global errors. The researcher discussed any ambiguities with the participants in order to help them resolve difficulties they experienced while reviewing aloud.
2 The untrained group: The untrained group, on the other hand, received no instruction on how to perform peer response; all of them participated in a single think-aloud training session during which the researcher played a video downloaded from YouTube in which the think-aloud mechanism was visually demonstrated. This was followed by the participants' think-aloud practice using a random passage. The misunderstandings, questions, and problems of group members were also addressed in this session.

Table 1
Demographic Characteristics of the Participants.

Group       Name     Gender   Age   English Experience (years)   Proficiency Level
Trained     Jack     M        25    9                            Upper Intermediate
            George   M        25    10                           Upper Intermediate
            Amelia   F        24    10                           Upper Intermediate
            Betty    F        24    10                           Upper Intermediate
            Nicole   F        23    8                            Upper Intermediate
Untrained   Allen    M        25    9                            Upper Intermediate
            Henry    M        24    8                            Upper Intermediate
            Alice    F        24    9                            Upper Intermediate
            Emma     F        24    10                           Upper Intermediate
            Rose     F        23    9                            Upper Intermediate

Fig. 1. A diagrammatic representation of data collection procedure.

Step 3 (peer reviewing). The members of each group performed the peer review activity at the same time and in the same venue, and both groups evaluated the same essays under similar conditions; however, the trained group's evaluation session was held one day apart from their untrained counterparts', and the trained group used the same peer review sheet as in the peer review training to assess the assigned papers. After obtaining the student reviewers' consent to record their think-alouds, each member of both groups was given copies of two anonymous essays (one focusing on causes and one focusing on effects) and was asked to assess the papers. The topics of the test essays were "The Causes of Crime in Society" and "The Effects of an Unhealthy Diet", which had immediate relevance to the participants' experience; the essays had been composed by their fellow students and contained both content and organization errors and linguistic errors. While the trained group were recommended to follow the instruction they had received and the peer review sheet guidelines, the untrained group, who had received only minimal training on how to do peer review, were recommended to follow the cause and effect essay conventions discussed in step one while assessing the papers. While the participants were evaluating the essays, their voices were recorded. Members of both groups were reminded to think aloud while evaluating the essays, and the researcher circulated in the class, making sure that the students were on task and gently prompting them to share and verbalize their thought processes if they went quiet. At the end of the peer evaluation sessions, the researcher collected the reviewed papers for analysis.
As several scholars contend, think-aloud protocols require participants to verbalize their thoughts while performing a task. Such methods help researchers investigate the cognitive processes underlying complex task performance and elicit rich data on such internal thought processes (Bowles, 2010; Polio & Friedman, 2017; Salkind, 2010). While it is frequently argued that thinking aloud can disrupt participants' cognitive processes, Ericsson and Simon (1993) stress that such verbalizations do not change the course or structure of thought processes, since the method relies on the verbalization of thoughts accessible in the participants' short-term, not their long-term, memory.
Further, one of the concerns expressed regarding think-aloud protocols is the reactivity effect they may cause under certain circumstances (Bowles, 2010; Ericsson & Simon, 1993; Polio & Friedman, 2017). With this issue in mind, the researcher did not notice any evidence of latency or accuracy reactivity effects in the current study, as both the trained and untrained groups thought out loud when performing the evaluation tasks. However, the absence of reactivity effects in this study does not necessarily mean that they do not exist in other peer review conditions, where a group that thinks aloud is compared to a group performing the activity silently. Indeed, as Polio and Friedman (2017) state, any changes in the process of task performance can be due to a variety of factors, including the nature of the task. More precisely, any latency or accuracy differences between the groups in the current study can be attributed to the instruction the trained group received during the earlier stages of the study rather than to the think-aloud procedure itself.

3.3. Data analysis

The collected data were analyzed in two phases, as follows:

A Audio-recordings. First, the reviewers' recorded think-alouds were analyzed. In total, the student reviewers spent eight hours and thirteen minutes evaluating the two essays (one focusing on causes and one on effects). Following Strauss and Corbin (1998), an inductive approach was adopted to analyze the recorded think-alouds. That is, first the reviewers' voices were transcribed and
translated into English. Second, a taxonomy of the reviewers' cognitive processes was created based on insights gained from the review of the literature (e.g., Memari Hanjani & Li, 2014a; de Guerrero & Villamil, 2000; Lin & Samuel, 2013). To serve that purpose, the translations were read recursively and the data were broken down, examined, and compared so that patterns and major themes could emerge (open coding). An important decision at this stage was think-aloud data segmentation: each comment a reviewer provided on an error was treated as a think-aloud segment. Next, the data were put back together in new ways by making connections between a category and its sub-categories (axial coding). Then, a further analysis was conducted to tally the frequency of each category (Appendix 2). Finally, representative mental processes employed by the reviewers were extracted to support, illustrate, and clarify each category/sub-category. It should be noted that the categories were verified by sharing 20 % of the data with an experienced colleague. Disagreements in coding were resolved through discussion, and the preliminary set of coding categories was further refined.
B Essays. This phase of data analysis involved listening to the reviewers' voices and cross checking the recorded data against the evaluated essays. More precisely, this step comprised examining and tallying the reviewers' thought processes regarding feedback type (revision or non-revision oriented), evaluation focus (content and organization, or language and mechanics), and feedback validity (accurate or inaccurate). For instance, to compare the revision and non-revision oriented comments provided by the participants, the researcher first tallied the total number of reviewers' comments and the numbers of revision and non-revision oriented ones. Then, he calculated the percentages by dividing the revision and non-revision oriented comments by the total number of reviewers' comments separately. In addition, to determine the evaluation focus of the feedback provided by the reviewers, the total numbers of local and global comments were tallied first. Next, the figures were divided by the total number of revision oriented comments independently to compute the ratio for each group. Finally, to identify the validity of the feedback provided by the learners, the researcher initially tallied the total numbers of accurate and inaccurate comments separately and divided these by the total number of revision oriented comments to calculate the proportion of each category (a sketch of these computations follows this list).
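To make these two phases concrete, the following is a minimal sketch, in Python, of the bookkeeping they describe: tallying coded think-aloud segments into the category scheme reported in Section 4, and converting those tallies into the type, focus, and validity ratios. The category labels come from this article, but the function names, data structures, and demo values are illustrative assumptions rather than the author's actual instruments; the worked check simply re-derives the trained group's cause-essay percentages from the frequencies reported in Tables 2–4.

```python
from collections import Counter

# Category scheme reported in Section 4 (labels abridged).
REVISION_ORIENTED = {"correction", "criticism", "confirmation",
                     "explaining convention/rule"}
NON_REVISION_ORIENTED = {"processing", "repetition", "text analysis",
                         "translation", "prediction", "expressing confusion",
                         "agreeing/disagreeing", "expressing personal opinion"}

def tally(segments):
    """Count coded segments; one segment = one comment on one error."""
    return Counter(category for _, category in segments)

def ratios(revision, non_revision, local, global_, accurate):
    """The three measures of phase B: type is computed over all comments;
    focus and validity over revision-oriented comments only."""
    total = revision + non_revision
    return {
        "revision_oriented": revision / total,
        "local_focus": local / revision,
        "global_focus": global_ / revision,
        "accurate": accurate / revision,
    }

# Segment tally on two toy think-aloud segments.
segments = [("The term BEING is misspelled.", "correction"),
            ("Let's read the whole essay first.", "processing")]
print(tally(segments))            # Counter({'correction': 1, 'processing': 1})

# Worked check: trained group, cause essay (Tables 2-4): 46 revision vs
# 33 non-revision comments; 32 local, 14 global; 36 accurate.
print({k: f"{v:.0%}" for k, v in ratios(46, 33, 32, 14, 36).items()})
# -> {'revision_oriented': '58%', 'local_focus': '70%',
#     'global_focus': '30%', 'accurate': '78%'}, matching the tables.
```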

4. Results

The research question addressed the trained and untrained reviewers' cognitive processes concerning the type, focus, and accuracy of the comments by examining the think-aloud and written data. In what follows, the think-aloud data analysis results are presented:

4.1. Revision type

The reviewers' thoughts were classified into two categories: revision-oriented versus non-revision-oriented. The revision-oriented category refers to cognitive processes intended to judge the essays as well as to deliver feedback and offer comments on them. The non-revision-oriented category, on the other hand, is not directly concerned with evaluation or feedback delivery; it mainly captures the reviewers' general and vague comments, their feelings and emotions, their intention to maintain the task, and the challenges they face weighing the essays they review. Table 2 compares the revision/non-revision oriented comments provided by the five trained and five untrained reviewers assessing a cause and an effect essay written by anonymous students.
As this table shows, a tally of the students' responses suggests that, on average, 59.5 % (58 % for the cause and 61 % for the effect essay) of the trained reviewers' comments were revision oriented, requiring the writers to fix, amend, change, or improve their texts, whereas only 30 % (29 % for the cause and 31 % for the effect essay) of the feedback produced by the untrained reviewers was revision oriented. This may indicate the difference that three 30-min one-on-one training sessions, along with peer review sheets, can make in the nature of the comments provided by trained learners compared to their untrained counterparts.
Revision oriented feedback offered by the reviewers included correction, criticism, confirmation, and explaining a convention/rule regarding the text they evaluated. Non-revision oriented feedback, on the other hand, encompassed processing, repetition, text analysis, translation, prediction, expressing confusion, agreeing/disagreeing with the writer's logic, and expressing personal opinions while assessing the essays. Both revision and non-revision oriented feedback were delivered by members of both groups in their think-alouds, even though the frequency of employing them varied not only between the groups compared, but also within the members of the same group. In general, while correction was the most frequent, explaining a rule/convention was the least frequent revision oriented feedback used by both the trained and untrained groups. On the other hand, considering the non-revision oriented feedback, repetition and translation were the most frequent, and expressing opinions and prediction were the least frequent comments provided by the trained and untrained groups respectively.

Table 2
The frequency and proportion of revision oriented vs non-revision oriented feedback provided by the trained and untrained reviewers evaluating a cause and an effect essay.

            Trained                                       Untrained
            Revision oriented   Non-revision oriented     Revision oriented   Non-revision oriented
Essay       F     P             F     P                   F     P             F     P
Cause       46    58 %          33    42 %                15    29 %          36    71 %
Effect      52    61 %          34    39 %                19    31 %          42    69 %
Total       98    59.5 %        67    40.5 %              34    30 %          78    70 %

F: Frequency; P: Percentage.

In what follows, the participants' cognitive processes which characterize such feedback are presented:

4.1.1. Correction
The reviewers used correction to fix the grammatical mistakes, wrong/inappropriate lexical items, and spelling, punctuation, or capitalization errors they noticed in the essays they evaluated. Evidence of the reviewers trying to fix such mistakes is shown in the following think-aloud examples:
“This sentence has a grammatical mistake. The term SITUATION should be in plural form.” Or, “The whole paragraph is just one
sentence and it is run-on.” Or, “The term BEING is misspelled.” Or, “The right collocation for crime is COMMIT not DO. We should say
COMMIT a CRIME.” Or, “The right preposition for PAY ATTENTION is TO.”

4.1.2. Criticism
It referred to identifying the lack of an element or the presence of a wrong element in the text and indicated the reviewer's disapproval. Criticism was either supplemented with instruction to improve the essay's quality or offered without any solution. Criticism is illustrated in the following think-aloud extracts:
Providing solution: “The supporting sentence of the first paragraph is not relevant. It should have elaborated the role of the family
in committing crime in society.” Or, “The second topic sentence is very similar to the first topic sentence. The author could have written
it differently by providing some facts or statistics.”
Providing no solution: “The introduction lacks thesis statement.” Or, “The method of organization of ideas in the essay is not
clear.” Or, “The final thought of the essay didn’t look good.”

4.1.3. Confirmation
It involved approving what the writer had written in terms of content and organization. The following think-aloud episodes provide
some examples:
“The essay is well organized. It has one introduction, three supporting paragraphs, and one conclusion.” Or, “I like the conclusion more
than other paragraphs. It includes all the necessary elements like restated thesis, summary of the main points, and final thought.” Or,
“Transition words are used properly.”

4.1.4. Explaining convention/rule


This type of feedback indicated the reviewers’ attempt to provide a mini lesson regarding the essay writing conventions/rules based
on the instructions they had already received. The following think-aloud quotes contain this type of response:
“An introduction should begin with a motivator and a motivator can be a question, quotation, contrast, and facts or statistics.” Or, “The
conclusion should include restated topic, summary of the main points, and final thought.”

4.1.5. Processing
It was used when the reviewers tried to make decisions about the steps they needed to take in order to move the task forward. Examples of this strategy used by the reviewers are shown in the following:
“I need to read the introduction again to understand if it has got a clear thesis statement.” Or, “Let’s read the whole essay first. Then, I
will check it for content and after that for mechanics.” Or, “Now let’s check the essay in terms of language and mechanics.”

4.1.6. Prediction
Sometimes the reviewers tried to predict what the next part/element of the essay would be. Examples of such behaviors are evident
in the following think-aloud extracts:
“Based on the overview provided in the introduction, the first body paragraph should be about FAMILY.” Or, “As I understand the first
body paragraph should be about EFFECT on BODY.”

4.1.7. Text analysis


It encompassed the reviewers' general evaluative judgments on the presence or absence of textual elements or aspects of a written text. In other words, it differed from criticism or confirmation in that it focused solely on detecting text structures. The use of text analysis is illustrated in the following think-aloud extracts:
“The essay has one introduction, three body paragraphs, and one conclusion.” Or, “This sentence seems more like a background in­
formation and can’t be a motivator as it doesn’t include a question, quotation, contrast, and facts or statistics or even an anecdote.” Or,
“This seems to be the topic sentence of the first body paragraph.”

4.1.8. Expressing confusion


It was voiced when the reviewers expressed doubt about the accuracy of a textual or linguistic item they read, owing to their low linguistic competence. Hence, they struggled to fix what they guessed might be wrong. This reaction is illustrated in the think-aloud extracts which follow:


“I have no idea why these two sentences are written as a separate paragraphs.” Or, “I am not sure if comma can be used after
NOWADAYS.” Or, “I’m not sure if INVOLVE is a right choice in this sentence. I prefer INCLUDE. If I had a dictionary, I could double
check the difference in the usage of these terms.”

4.1.9. Expressing personal opinion


After reading and interpreting the writer's message, some reviewers tried to expand on it in the way they preferred. Here are some examples taken from the reviewers' think-aloud extracts:
“In my opinion, having a healthy diet is very important and it is very influential on one’s life.” Or, “In my opinion, the most important
cause of crime is unemployment. When people have jobs, they do not commit crime.”

4.1.10. Agreeing/disagreeing with the writer's logic


Upon reading and comprehending the writer's opinion, the reviewers judged its rationality and consequently expressed their approval or disapproval of what had been stated. This tendency is illustrated in the following think-aloud extracts:
“That’s exactly what the writer says. Parents are the kids’ role models.” Or, “It’s not right to say all people commit crime if they are in the
same situation. It doesn’t happen necessarily.”

4.1.11. Repetition
It encompassed re-reading the same part of the text or unintentionally repeating one's own words in order to maintain the task or understand the writer's intention.

4.1.12. Translation
The reviewers frequently translated the text they read into their native language in order to understand the writer’s message.

4.2. Revision focus

The reviewers' cognitive processes extracted from the think-aloud data were also analyzed with respect to their focus. The comments, as mentioned above, were classified into two categories: local and global. Local comments refer to the comments addressing grammatical and punctuation errors as well as inaccuracies in word choice. Global comments, on the other hand, refer to the ones that address the content of writing, including the ideas expressed by the writer, overall organization, and coherence of writing. Table 3 shows a synopsis of the trained and untrained reviewers' evaluation focus in assessing the cause and effect essays.
As can be seen, a tally of the reviewers' revision focus indicated that 70.5 % (70 % for the cause and 71 % for the effect essay) of the trained participants' evaluations and 68 % (73 % for the cause and 63 % for the effect essay) of their untrained counterparts' addressed local mistakes. Conversely, 29.5 % (30 % for the cause and 29 % for the effect essay) of the trained learners' and 32 % (27 % for the cause and 37 % for the effect essay) of the untrained learners' assessment focus targeted global problems in the texts they checked. Based on the data, the revision focus of both trained and untrained reviewers followed similar patterns, and focus on local errors comprised almost two-thirds of the comments. In order to examine the gap between the reviewers' focus on local and global issues in terms of frequency and percentage, three raters were asked to assess the texts which the reviewers had already evaluated. They weighed both the cause and the effect essays, individually and together, and all agreed that while the sample cause essay included 54 (80 %) local mistakes and 15 (20 %) potential global problems which could be commented on, these figures were 68 (82 %) local and 15 (18 %) potential global problems for the effect essay. Hence, it seems quite normal that, in both the cause and the effect essays, the feedback focus of the trained and the untrained reviewers on local issues tripled that on global ones. However, in terms of frequency, both the local and global comments provided by the trained reviewers were about three times as frequent as those of their untrained counterparts (69 and 29 compared to 22 and 11 respectively).
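As a rough illustration of the gap analysis in this paragraph, the short sketch below (same illustrative conventions as the earlier one) contrasts the local/global share of the rater-identified problems with the trained reviewers' comment focus; the counts are those reported above and in Table 3, and the function name is hypothetical.

```python
def local_global_share(local, global_):
    """Return the (local, global) proportions of a pair of counts."""
    total = local + global_
    return local / total, global_ / total

# Rater-identified problems in the cause essay: 54 local, 15 global.
print(local_global_share(54, 15))   # -> (0.78..., 0.21...), the ~80/20 split noted above
# Trained reviewers' comments on the same essay (Table 3): 32 local, 14 global.
print(local_global_share(32, 14))   # -> (0.69..., 0.30...), i.e. the 70/30 focus reported
```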

Table 3
The frequency and ratio of local/global feedback provided by the trained and untrained reviewers evaluating a cause and an effect essay.

            Trained                     Untrained
            Local          Global       Local          Global
Essay       F     P        F     P      F     P        F     P
Cause       32    70 %     14    30 %   11    73 %     4     27 %
Effect      37    71 %     15    29 %   12    63 %     7     37 %
Total       69    70.5 %   29    29.5 % 22    68 %     11    32 %

Local: language and mechanics mistakes; Global: content and organization mistakes.

4.3. Revision accuracy

Finally, to determine the extent to which the participants' comments were accurate, all the reviewers' audio-recorded data were listened to and cross checked against both the cause and the effect essays they had evaluated. The researcher checked the accuracy of the reviewers' revision oriented comments against the essay writing instructions and the peer review sheet (see Appendix 1). Table 4 provides a summary of the frequency and percentage of the accurate and inaccurate comments given by both trained and untrained reviewers assessing the sample cause and effect essays.
As illustrated in Table 4, a tally of the participants' revision oriented comments revealed that on average 77.5 % of the trained and 77 % of the untrained reviewers' comments were valid, whereas only 22.5 % and 23 % of the feedback offered by the two groups, respectively, were inaccurate. In this respect, the performance of both groups was apparently similar, suggesting the inefficiency of the three 30-min one-on-one researcher–reviewer instruction sessions received by the trained group. However, a closer look at the table shows that the frequency of feedback provided by the trained reviewers is about three times that of their untrained peers. Hence, we can claim that training could at least foster the reviewers' concentration in locating errors.
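Because similar percentages can mask very different raw counts, the brief check below re-derives this point from the pooled Table 4 totals (which differ only trivially from the per-essay averages reported in the table); the variable names are illustrative.

```python
# Table 4 totals: accurate comments out of all revision-oriented comments.
trained_accurate, trained_total = 76, 98
untrained_accurate, untrained_total = 26, 34

print(f"trained:   {trained_accurate / trained_total:.1%}")      # 77.6%
print(f"untrained: {untrained_accurate / untrained_total:.1%}")  # 76.5%
# Near-identical ratios, yet the trained group located ~3x as many valid errors:
print(trained_accurate / untrained_accurate)                     # ~2.92
```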

5. Discussion

While different aspects of peer review training have been discussed in the literature, little is known about the cognitive processes of trained peer reviewers compared to their untrained counterparts in terms of the type, focus, and quality of feedback in L2 contexts. This case study therefore addressed this gap by comparing the potential effects of one-to-one researcher–reviewer training sessions on the cognitive processes of trained and untrained L2 peer reviewers; the type, focus, and accuracy of their comments; and the logic behind their approach to feedback delivery, by employing recorded data (think-alouds) and textual analyses (evaluated essays).
The study produced interesting yet unsurprising findings. In terms of revision types, the revision-oriented feedback produced by the trained reviewers doubled that of the untrained ones (60 % vs 30 %). This shows the efficacy of the one-to-one researcher–reviewer training sessions as well as the peer review sheets, as they helped students stay on task, offering more relevant and specific feedback compared to the untrained participants, whose primary mode of evaluation was agreeing or disagreeing with what had been expressed by the writers, expressing personal ideas, and providing vague, irrelevant comments. As the ultimate objective of evaluation is increasing the quality of students' essays, it can be claimed that the method incorporated in this study could boost the quality of the trained participants' text evaluations, and consequently their precise comments could have the potential to improve review receivers' papers. Hence, peer evaluation can, to a great extent, be used as an alternative/complement to conventional teacher feedback in higher level L2 writing classes. In this regard, our results corroborate the reports of other scholars (Memari Hanjani & Li, 2014a; Chang, 2015; Hu, 2005; Liou & Peng, 2009; Min, 2006) who claim students can offer precise, relevant, and effective feedback provided that they acquire enough expertise and skill in reviewing writing.
Another significant finding of this research is the higher ratio of local feedback compared to global comments in both groups. As noted, local feedback comprised about two-thirds of the total comments provided by the reviewers, regardless of exposure to evaluation instruction. This figure may seem discouraging on the surface. However, the examination and tallying of local and global mistakes in the cause and effect essays revealed that while they contained 59 and 68 local errors respectively, the potential number of global mistakes was limited to 6 and 5 in each. So, in both texts local issues comprised more than 90 % of the total errors; this could naturally affect the revision focus of the participants, and it seems normal for the reviewers to focus more on local issues than global ones. Further, the frequencies of both local and global comments provided by the trained reviewers were about three times higher than those of their untrained peers (69 and 29 vs 22 and 11). This may indicate that training improved the reviewers' awareness and concentration as they evaluated the texts more carefully. Finally, as Ferris (2011) states, focusing on local errors is sometimes inevitable, as such errors impede reviewers' understanding of the text, be they teachers or students. Hence, in order to comprehend the message, they may feel the urge to provide more feedback on the language and mechanics of the papers they evaluate. Our finding of a higher percentage of local comments in both trained and untrained reviewers' evaluation activities, however, contradicts the findings of Chang (2015), Min (2003, 2005, 2006), and Tseng (2007), who stressed that their students' revision focus shifted from grammar and vocabulary to content and organization after training. The possible reasons for such discrepancies could be (a) the length of training, as in most of these investigations the reviewers received extensive training both in and outside class or during the whole semester, (b) the types of essays evaluated by peer reviewers (e.g. narrative, process, and expository), (c) the training focus, since in some cases the focus of training was only on global aspects of papers, (d) different categorization schemes of feedback types, (e) statistically insignificant differences in the focus on global errors pre- and post-training, and (f) student writers' freedom to use information from other sources to substantiate their opinions and develop their essays (first and revised drafts) at home, which gave them the chance to double-check their uncertainties and consequently develop linguistically better quality papers.

Table 4
The frequency and percentage of accurate/inaccurate feedback provided by the trained and untrained reviewers evaluating a cause and an effect essay.

            Trained                       Untrained
            Accurate       Inaccurate     Accurate       Inaccurate
Essay       F     P        F     P        F     P        F     P
Cause       36    78 %     10    22 %     12    80 %     3     20 %
Effect      40    77 %     12    23 %     14    74 %     5     26 %
Total       76    77.5 %   22    22.5 %   26    77 %     8     23 %


Regarding the validity of the comments provided by the trained and untrained reviewers, it was noticed that the ratio of accurate feedback was nearly the same in both groups (77.5 % vs 77 %). Yet, the frequency of correct feedback provided by the trained reviewers was about three times that of their untrained counterparts (76 vs 26). It seems safe to argue that the higher quantity of feedback by the trained reviewers could potentially increase the number of inaccurate cases, whereas providing less feedback allowed the untrained participants to maintain a high proportion of accurate comments, as they were cautious not to take risks, only commented on the distinct errors they noticed, and avoided commenting on what they doubted. Thus, even though the ratio of valid feedback in both groups was similar, the higher frequency of correct comments by the trained reviewers implies not only the efficacy of training, but also a growth in self-confidence. Further, locating more errors, seventy-seven percent of which were valid, can prompt a higher quality revised paper, which is the primary purpose of successful peer assessment. Considering the literature, this result corroborates the findings of previous studies (Chang, 2015; Hu, 2005; Lam, 2010; Min, 2005, 2006) which highlighted the growth in the quantity and quality of student reviewers' comments due to the instruction they had received.
In sum, as far as the results of the present study are concerned, even three researcher–reviewer training sessions made a difference, and the L2 learners could provide effective peer feedback. However, this does not mean that a one-off training session is adequate. Indeed, instruction should be constant, systematic, focused, and tailored to the students' weaknesses and strengths. Also, common problems of peer evaluation should be highlighted and addressed during training sessions. For instance, some of the invalid comments provided by the participants of this study originated from gaps in their linguistic competence which had been left underdeveloped in foundation courses such as general English. To address such problems, training sessions can also incorporate mini grammar lessons targeting the problem areas identified by examining student reviewers' cognitive processes during peer evaluation sessions.

6. Conclusion

This study has been the first attempt to compare the mental processes of trained and untrained peer reviewers regarding feedback focus, type, and quality. The significance of this study is twofold: first, it attempts to provide a glimpse into the cognitive processes of trained and untrained learners by analyzing their think-alouds during peer evaluation sessions. The other significance of this research lies in its methodology: I employed both audio and textual analyses to present a more accurate picture of the reviewers' behaviors while evaluating their peers' cause and effect essays.
The results of this investigation indicate that, with careful planning, systematic execution, and timely individualized feedback, the effect of peer review training can emerge. But this does not mean that L2 writing practitioners should opt for a one-shot training session, because EFL students' peer reviewing skills take time to develop. If time permits, peer review training should be extensive and constant, with each training session building on information obtained from the previous one so that student reviewers can continue improving their commenting skills and ultimately promote the quality of the papers they evaluate. However, due to the use of convenience sampling and the small sample size, the findings and implications are not meant to be generalized beyond the scope of this study. For those who are interested in verifying the findings reported here, further research involving EFL learners with different proficiency levels and linguistic and cultural backgrounds may be necessary. Finally, even though the participants of the study were homogeneous (all senior EFL students in a bachelor's program and upper-intermediate learners in terms of writing skill and English proficiency), the evaluation performance of the two groups was not examined at the onset. Research employing a pre-test post-test control group design could further validate the findings in the future. A deeper understanding of L2 peer reviewers' mental processes and feedback performances might enable educators to arrange more effective peer evaluation training sessions.

Appendix 1

[Peer review sheet; not reproduced here.]

Appendix 2

Trained Group (Focus on Cause)


George: Review Duration: 37 m 50 s

References

Berg, B. C. (1999). The effects of trained peer response on ESL students’ revision types and writing quality. Journal of Second Language Writing, 8, 215–241. https://doi.
org/10.1016/S1060-3743(99)80115-5
Bowles, M. A. (2010). The think-aloud controversy in second language research. New York: Routledge.
Byrd, D. (2003). Practical tips for implementing peer editing tasks in the foreign language classroom. Foreign Language Annals, 26(3), 434–441. https://doi.org/
10.1111/j.1944-9720.2003.tb02125.x
Chang, C. Y. (2015). Teacher modeling on EFL reviewers’ audience-aware feedback and affectivity in L2 peer review. Assessing Writing, 25, 1–20. https://doi.org/
10.1016/j.asw.2015.04.001
de Guerrero, M. C. M., & Villamil, O. S. (2000). Activating the ZPD: Mutual scaffolding in L2 peer revision. Modern Language Journal, 84, 51–68. https://doi.org/
10.1111/0026-7902.00052
Diab, N. M. (2010). Effects of peer- versus self-editing on students’ revision of language errors in revised drafts. System, 38, 85–95. https://doi.org/10.1016/j.
system.2009.12.008
Diab, N. M. (2011). Assessing the relationship between different types of student feedback and the quality of revised writing. Assessing Writing, 16, 274–292. https://
doi.org/10.1016/j.asw.2011.08.001
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge: MIT Press.
Ferris, D. (2011). Treatment of error in second language student writing (second edition). United States of America: University of Michigan Press.
Hu, G. (2005). Using peer review with Chinese ESL student writers. Language Teaching Research, 9, 321–342. https://doi.org/10.1191/1362168805lr169oa
Huang, M. C. (2004). The use of process writing and Internet technology in a Taiwanese college English wiring class: A focus on peer reviews (Doctoral dissertation). Available
from ProQuest Dissertations and Theses database (UMI No. 3242220).
Jacobs, G. M., Curtis, A., Braine, G., & Huang, S.-Y. (1998). Feedback on student writing: Taking the middle path. Journal of Second Language Writing, 7, 307–317. https://doi.org/10.1016/S1060-3743(98)90019-4
Kamimura, T. (2006). Effects of peer feedback on EFL student writers at different levels of English proficiency: A Japanese context. TESL Canada Journal, 23(2), 12–39. https://doi.org/10.18806/tesl.v23i2.53
Lam, R. (2010). A peer review training workshop: Coaching students to give and evaluate peer feedback. TESL Canada Journal, 27(2), 114–127. https://doi.org/10.18806/tesl.v27i2.1052
Lin, S. S. P., & Samuel, M. (2013). Scaffolding during peer response sessions. Procedia - Social and Behavioral Sciences, 90, 737–744. https://doi.org/10.1016/j.sbspro.2013.07.147
Liou, H. C., & Peng, Z. Y. (2009). Training effects on computer-mediated peer review. System, 37, 514–525. https://doi.org/10.1016/j.system.2009.01.005
Liu, J., & Hansen, J. (2002). Peer response in second language writing classrooms. Ann Arbor: University of Michigan Press.
Liu, J., & Sadler, R. W. (2003). The effects and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes, 2, 193–227. https://doi.org/10.1016/S1475-1585(03)00025-0
Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer’s own writing. Journal of Second Language Writing, 18, 30–43. https://doi.org/10.1016/j.jslw.2008.06.002
Memari Hanjani, A. (2013). Peer review, collaborative revision, and genre in L2 writing. Unpublished doctoral dissertation. Exeter, United Kingdom: University of Exeter.
Memari Hanjani, A. (2016). Collaborative revision in L2 writing: Learners’ reflections. ELT Journal, 70(3), 269–307. https://doi.org/10.1093/elt/ccv053
Memari Hanjani, A., & Li, L. (2014a). Exploring L2 writers’ collaborative revision interactions and their writing performance. System, 44, 101–114. https://doi.org/10.1016/j.system.2014.03.004
Memari Hanjani, A., & Li, L. (2014b). EFL learners’ written reflections on their experience of attending process genre-based, student-centred essay writing course. The Asian Journal of Applied Linguistics, 1(2), 149–166.
Mendonca, C. O., & Johnson, K. E. (1994). Peer review negotiations: Revision activities in ESL writing instruction. TESOL Quarterly, 28(4), 745–769. https://doi.org/10.2307/3587558
Min, H. (2003). Why peer comments fail? English Teaching and Learning, 27(3), 85–103.
Min, H. (2005). Training students to become successful peer reviewers. System, 33, 293–308. https://doi.org/10.1016/j.system.2004.11.003
Min, H. (2006). The effects of trained peer review on EFL students’ revision types and writing quality. Journal of Second Language Writing, 15, 118–141. https://doi.org/10.1016/j.jslw.2006.01.003
Min, H. (2008). Reviewer stances and writer perceptions in EFL peer review training. English for Specific Purposes, 27, 285–305.
Min, H. (2016). Effect of teacher modeling and feedback on EFL students’ peer review skills in peer review training. Journal of Second Language Writing, 31, 43–57. https://doi.org/10.1016/j.jslw.2016.01.004
Morra, A. M., & Romano, M. E. (2009). University students’ reactions to guided peer feedback and EAP compositions. Journal of College Literacy & Learning, 35, 19–30.
Panadero, E. (2016). Is it safe? Social, interpersonal, and human effects of peer assessment: A review and future directions. In G. T. L. Brown, & L. R. Harris (Eds.), Handbook of social and human conditions in assessment (pp. 247–266). New York: Routledge.
Paulus, T. M. (1999). The effect of peer and teacher feedback on student writing. Journal of Second Language Writing, 8, 265–289. https://doi.org/10.1016/S1060-3743(99)80117-9
Peng, Z. Y. (2007). A study of blogging for enhancement of EFL college students’ writing (Master’s thesis). Taiwan: National Ching-hua University.
Polio, C., & Friedman, D. A. (2017). Understanding, evaluating, and conducting second language writing research. New York: Routledge.
Rahimi, M. (2013). Is training student reviewers worth its while? A study of how training influences the quality of students’ feedback and writing. Language Teaching Research, 17, 67–89. https://doi.org/10.1177/1362168812459151
Rollinson, P. (2005). Using peer feedback in ESL writing class. ELT Journal, 59(1), 23–30. https://doi.org/10.1093/elt/cci003
Salkind, N. J. (2010). Encyclopedia of research design (Vol. 1). Thousand Oaks, CA: Sage Publications.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research (2nd ed.). Newbury Park, CA: Sage.
Ting, M., & Qin, Y. (2010). A case study of peer feedback in a Chinese EFL writing classroom. Chinese Journal of Applied Linguistics, 33(4), 87–98.
Tseng, W. J. (2007). Using peer feedback in revision: Taiwanese university students’ English writing (Master’s thesis). Taiwan: National Ping-tung University of Education.
Tsui, A. B. M., & Ng, M. (2000). Do secondary L2 writers benefit from peer comments? Journal of Second Language Writing, 9(2), 147–170. https://doi.org/10.1016/S1060-3743(00)00022-9
Villamil, O. S., & De Guerrero, M. C. M. (1996). Peer revision in the L2 classroom: Social-cognitive activities, mediating strategies, and aspects of social behavior. Journal of Second Language Writing, 5(1), 51–75. https://doi.org/10.1016/S1060-3743(96)90015-6
Villamil, O. S., & De Guerrero, M. C. M. (1998). Assessing the impact of peer revision on L2 writing. Applied Linguistics, 19(4), 491–514. https://doi.org/10.1093/applin/19.4.491
Wang, W. (2014). Students’ perceptions of rubric-referenced peer feedback on EFL writing: A longitudinal inquiry. Assessing Writing, 19, 80–96. https://doi.org/10.1016/j.asw.2013.11.008
Yang, M., Badger, R., & Zhen, Y. (2006). A comparative study of peer and teacher feedback in Chinese EFL writing class. Journal of Second Language Writing, 15, 179–200. https://doi.org/10.1016/j.jslw.2006.09.004
Yang, Y. F., & Meng, W. T. (2013). The effects of online feedback training on students’ text revision. Language Learning & Technology, 17, 220–238.
Yu, S. (2020). Giving genre-based peer feedback in academic writing: Sources of knowledge and skills, difficulties and challenges. Assessment and Evaluation in Higher Education, 1–17. https://doi.org/10.1080/02602938.2020.1742872
Yu, S., & Hu, G. (2017). Understanding university students’ peer feedback practices in EFL writing: Insights from a case study. Assessing Writing, 33, 25–35. https://doi.org/10.1016/j.asw.2017.03.004
Zhao, H. (2010). Investigating learners’ use and understanding of peer and teacher feedback on writing: A comparative study in a Chinese English writing classroom. Assessing Writing, 15(1), 3–17. https://doi.org/10.1016/j.asw.2010.01.002
Zheng, C. (2012). Understanding the learning process of peer feedback activity: An ethnographic study of Exploratory Practice. Language Teaching Research, 16, 109–126. https://doi.org/10.1177/1362168811426248
Zhu, Q., & Carless, D. (2018). Dialogue within peer feedback processes: Clarification and negotiation of meaning. Higher Education Research & Development, 37(4), 883–897. https://doi.org/10.1080/07294360.2018.1446417
Zhu, W. (1995). Effects of training for peer response on students’ comments and interaction. Written Communication, 12, 492–528. https://doi.org/10.1177/0741088395012004004
Zhu, W. (2001). Interaction and feedback in mixed peer response groups. Journal of Second Language Writing, 10, 251–276. https://doi.org/10.1016/S1060-3743(01)00043-1
Zhu, W., & Mitchell, D. A. (2012). Participation in peer response as activity: An examination of peer response stances from an Activity Theory perspective. TESOL Quarterly, 46(2), 362–386. https://doi.org/10.1002/tesq.22

Alireza Memari Hanjani is a TESOL lecturer at Islamic Azad University, Islamshahr Branch, in Iran. He is a PhD graduate of the University of Exeter and has taught a wide range of English courses to non-native learners in Iran and the UK for more than 18 years. His research interests include qualitative and quantitative research, cooperative and collaborative learning, peer collaboration, student-centred pedagogy, autonomous learning, feedback and error correction in writing, and peer evaluation. He is currently conducting research in these areas.
